
Biofeedback and the Bard: Alexis Kirke Debuts “Conducting Shakespeare”

To be or not to be? Audiences have thrilled to these immortal lines for centuries, identifying with Hamlet's existential dilemma. William Shakespeare was a master at evoking powerful emotions in people, and now audience members have the chance to influence a performance in turn, via various biofeedback techniques.

This article was published in Scientific American’s former blog network and reflects the views of the author, not necessarily those of Scientific American.



Conducting Shakespeare debuts tonight, May 2, at London’s Victoria and Albert Museum, as part of a celebration of 450 years since the playwright’s birth. (Jen-Luc Piquant points out that actually, given the time difference between London and Los Angeles, where I am posting this, it's likely already made its debut.) It is the brainchild of Alexis Kirke, a permanent research fellow in music at Plymouth University’s Interdisciplinary Center for Computer Music Research (ICCMR), as well as composer in residence for the Plymouth Marine Institute.

Kirke has been creating ingenious multimedia performance pieces that merge art and science for years now, many involving some form of sonification of data. For instance, “Sunlight Symphony” turned an iconic campus building into a music instrument played by the rising sun, while “Insight” saw Kirke (who has a visual condition called palinopsia) simulating his hallucinations via an iPad and converting them into sound in a “duet” with a flautist.


He also made a short film, Many Worlds, with four different scripts. Biofeedback readings from the live audience determined which of the four endings the viewers experienced. And he created “Cloud Chamber,” a duet for violin and subatomic particles detected in real time via a working cloud chamber. The camera recorded the tracks of the cosmic rays and converted them into synthesized music, as accompaniment to a violinist.

Kirke found inspiration for his current project by watching YouTube clips of monologues performed by great Shakespearean actors like Patrick Stewart and Ian McKellen. True, Shakespearean diction can be a bit impenetrable at times, “but a good actor can bring that language to life,” he told me when we chatted a week or so ago.

He wanted to bring that same emotional impact to Conducting Shakespeare, and tapped a couple of recent graduates of the prestigious Guildhall School of Music and Drama for that purpose: Melanie Heslop and James Mack.

Working with Shakespearean scholar Peter Hinds, Kirke has selected 18 Shakespearean monologues from Romeo and Juliet, Macbeth, Hamlet, and Titus Andronicus for the performance itself, which will be mixed and matched according to the biofeedback he gets from four pre-selected audience members. Those four will be seated in special chairs and fitted with various biosensors to monitor their heart rate, galvanic skin response (GSR, or perspiration), muscle tension, and EEG readings. He plans to set a baseline by playing some pure low-frequency sine tones around 32 Hz, barely audible but calming.
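Kirke hasn’t published the mapping he uses, but the setup above suggests its general shape: normalize each listener’s sensor readings against the calm baseline, collapse them into a single arousal score, and let the pooled score steer which monologue comes next. Here is a minimal Python sketch of that idea; the sensor names, the equal weighting, and the two-way calm/intense split are all illustrative assumptions, not Kirke’s actual system:

```python
from statistics import mean

def arousal_index(readings, baselines, weights=None):
    """Collapse biosensor readings into one arousal score.

    readings/baselines are dicts keyed by sensor name, e.g. heart
    rate (bpm), GSR (microsiemens), muscle tension, EEG band power.
    Each channel contributes its relative change from baseline;
    equal weighting is an illustrative default.
    """
    weights = weights or {k: 1.0 for k in readings}
    deltas = [
        weights[k] * (readings[k] - baselines[k]) / baselines[k]
        for k in readings
    ]
    return mean(deltas)  # > 0: more aroused than baseline; < 0: calmer

def pick_monologue(scores, calm_pieces, intense_pieces):
    """Branch on the four listeners' pooled arousal (toy two-way split)."""
    pool = intense_pieces if mean(scores) > 0 else calm_pieces
    return pool[0]
```

A real system would smooth these signals over time and weight the channels differently; the point is only that a handful of baseline-relative deltas is enough to drive the branching between monologues.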

While participants will be aware of these added elements, “I don’t want it to feel like a scientific experiment,” says Kirke. “It should be a strong dramatic experience heightened by a little soupçon of science.”

To interpret all that biofeedback, Kirke is drawing on something called a valence/arousal model, a common tool in cognitive psychology for distinguishing between different types of emotional response, something that is notoriously difficult to pin down quantitatively. (Granted, human emotion is rich and varied, spanning far more than a paltry two dimensions, but statistically, arousal and valence are the broadest general categories.)

The arousal axis measures a physiological response in the body associated with a reactive psychological state, using the kinds of biofeedback tools Kirke is employing in Conducting Shakespeare. Kirke’s focus is on arousal because valence is much more difficult to assess, or even to define.

It concerns a value judgment of sorts, namely, how desirable is a given emotion? Happiness and joy, for instance, would both be considered positive emotions on the valence axis, but joy is more intense. So joy would be a positive high valence, while happiness would be positive low valence. Valence is usually measured by supplementing biofeedback with other observational methods, like body language or facial expressions, methods that are quite vulnerable to misinterpretation.
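For readers who want the geometry spelled out: the model places emotions on a plane, valence on one axis and arousal on the other, so a coarse classifier is just a sign check on each coordinate. A toy sketch, where the quadrant labels are my own illustration rather than Kirke’s terminology:

```python
def emotion_quadrant(valence, arousal):
    """Map a (valence, arousal) pair, each in [-1, 1], to a coarse label.

    Positive valence = desirable emotion; positive arousal = high
    physiological activation. The four labels are illustrative.
    """
    if valence >= 0:
        return "excited/joyful" if arousal >= 0 else "content/relaxed"
    return "angry/fearful" if arousal >= 0 else "sad/bored"
```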

On one level, Conducting Shakespeare is just the latest in Kirke’s many quirky artistic endeavors, but it could end up feeding back into his research as well. For starters, the readings will hone the accuracy of the arousal measurement system he has built, and any insight gleaned into how human emotion works could enhance his continuing work on artificial intelligence – specifically in the area of affective computing, whereby a machine could be designed to detect and respond to a user’s emotional state. Think of it as a kind of simulated empathy.

To further that end, he and his colleagues are developing a biofeedback machine learning algorithm system for musical composition. “It’s kind of like musical pill-popping in the sense that we know how much the sound of music can affect people’s emotions and state of mind,” he says. The idea is to poke people with sound, measure their brain response and use machine learning algorithms to determine which generative noises or music most strongly affect emotional response. Using this data, “You could devise an emotional map of someone’s mind based on their response.”
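That “poke with sound, measure, and learn” loop can be caricatured in a few lines: record the measured response to each generative stimulus, then favor whichever one has the strongest average effect. This greedy sketch is an assumption about the general shape of such a system, not Kirke’s actual algorithm, which would use learned models over EEG features:

```python
from collections import defaultdict
from statistics import mean

class EmotionalMap:
    """Toy stimulus-response map: average measured arousal per sound."""

    def __init__(self):
        self.responses = defaultdict(list)  # stimulus -> list of responses

    def record(self, stimulus, measured_arousal):
        """Log one measured emotional response to a generative sound."""
        self.responses[stimulus].append(measured_arousal)

    def strongest(self):
        """Return the stimulus with the largest average response so far."""
        return max(self.responses, key=lambda s: mean(self.responses[s]))
```

Iterating record-and-query like this is the crude version of building “an emotional map of someone’s mind based on their response.”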

Nor would this be the first time Kirke’s art has informed his science. A few years ago, he recruited a saxophonist to perform a duet with an artificial-intelligence system that generated humpback whale song in real time, mimicking the songs of the creatures in the wild. It was performed at the Plymouth Marine Institute and soon caught the attention of scientists at St. Andrews. There is now a bona fide scientific research collaboration to use that data for further study.

That said, “I think you can spend forever analyzing the properties of performance tools,” says Kirke. “I want to get my hands dirty, so to speak, and actually use them in performance. It’s developing the models that really excites me.” Expect to see Kirke continuing to blur the lines between his art and his science, which means a good show for the rest of us.