12 December 2014
Written by Professor Peter McOwan,
Vice-Principal for Public Engagement and Student Enterprise
All researchers are passionate about the subject they study; without that drive to answer previously unanswered questions we wouldn’t get out of bed in the morning. I’m passionate about my research, but I’m also passionate about involving others in it: finding out their thoughts, concerns and views, and explaining what I do and why I do it. I think this makes me a better researcher; it certainly makes me a better teacher and communicator.
Involving the public in research, like all good things, does take time and effort, but the payback can be invaluable. Answers to questions can come from anywhere if you listen hard enough, and my viewpoint can and should change as I discover more.
An example of this is the recent work that Howard Williams, my PhD student, and I undertook on developing an artificial intelligence system that creates new forms of magic tricks. However clever our computer’s mathematical techniques, if a trick wasn’t seen as magical it failed, and the only real way to find that out was to work with people and ask what they thought of the effects.
The work involved mass-participation online experiments, as well as trips to science festivals and arts fairs to test new variations of the computer-generated tricks. From these we gathered feedback from audiences, discussing their take on each trick, and how the work could in future extend beyond magic tricks into giving us a better understanding of how human brains work. Every time we came away with information that helped us refine the method; sometimes we even threw away ideas that we thought were great in the lab, but the audience didn’t.
At the end, however, we had audience-tested tricks that really worked, and worked well. Without engaging with the public that outcome would have been impossible. Without the online experiments and the data collected from hundreds of citizen scientists, we would not have discovered the limits of the length changes we could safely use in our magic vanishing-pattern jigsaw, or which playing cards people noticed or favoured most and least. And without working with Davenports magic shop in London to sell the final product to real magicians, we would never have known whether the research actually produced magical effects that magicians would want to use in their acts.
Like all other researchers we of course wrote the work up for publication in a high-profile journal (interestingly, that’s often the only route to impact or dissemination I see stated in grants I review), and published it was. The review process was made all the easier because we could argue strongly that this had been a real co-creation with audiences and had real impact with magicians; after all, they had bought it in a shop not knowing it was computer generated, a sort of magician’s Turing test.
When the paper was published we made the effort to get involved in press work to promote the ideas, and that’s always hard work, as journalists ask a lot of questions and expect answers straight away because of their deadlines. Fitting this almost instant response around the day-to-day work I already had was challenging, and it would have been easier to say no, but every article meant that more people out there learned something new, and saw that AI wasn’t going to take over the world or do anything sinister, and that it could be used as a tool to help creative folk like magicians. Who knows, it might even inspire someone out there to become a computer scientist or take up magic as a hobby: both good outcomes in my book.
Oh, and come the next REF… hopefully all those researchers worldwide who contacted me to say how interesting the work was will help with the citation count too.
Read the paper ‘Magic in the machine: a computational magician’s assistant’ by Howard Williams and Peter McOwan here.
Written by Peter McOwan
Vice-Principal for Public Engagement and Student Enterprise,
Co-Director of Computer Science for Fun