The Emoti-Chair and MusicViz prototypes were unveiled last Thursday at a special concert in Toronto.
The systems were developed by Ryerson’s Centre for Learning Technologies (CLT) and the Science of Music, Auditory Research and Technology (SMART) lab.
At the concert, deaf and hard-of-hearing people sat in a chair outfitted with voice coils, or speakers that translated each musical note into a high- or low-frequency vibration.
Through these vibrations, they could experience a broad range of music – from punk to emo.
Vibrations produced by the Emoti-Chair tap into the highly developed sense of touch that many deaf or hearing-impaired people have.
MusicViz, on the other hand, taps into their sense of sight.
This music-visualization software tool presents sound as a series of images, graphics and colours that seek to portray the emotional nuances in the music.
Three years ago the research team began experimenting with new ways of interpreting sound using visualization alone, according to Deborah Fels, director of the Centre for Learning Technologies and associate professor at Ted Rogers School of Information Technology Management.
Their first projects sought to uncover a new way of making closed captioning more effective.
Conventional closed captions provide only text, so deaf viewers miss out on sound effects, music and the intonations in speakers’ voices.
By experimenting with colours, graphics, lights and images, Fels soon discovered that seeing is not the best way for a user to connect with emotion and truly understand sound.
Her research team soon found a connection between the way nerve cells in the cochlea (the inner part of the ear) absorb sound frequencies or vibrations, and the way nerves in the skin react to touch.
“So it made sense to use the human cochlea as a model for how to present vibrations to the skin,” said Fels. To the CLT team’s delight this idea seemed to work.
In tandem with Ryerson’s Department of Psychology, the research team uncovered how tactile vibrations are perceived by the brain. This allowed them to communicate emotions inherent in musical notes.
“Obviously the ear can respond to more frequencies than skin. So we reduced the entire set of frequencies to something the skin was able to understand.”
By building a mathematical model, researchers could determine how both high and low frequencies are absorbed by nerves, and build a physical chair to relay musical messages to the deaf and hearing impaired.
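As a rough illustration of what such a frequency reduction might look like, the sketch below compresses the wide audible range onto a narrower vibrotactile range using a simple log-scale mapping. The ranges and formula are assumptions for illustration only, not the researchers’ actual model.

```python
import math

# Assumed ranges, for illustration only (not the CLT team's actual figures):
AUDIBLE_MIN, AUDIBLE_MAX = 20.0, 20_000.0   # rough range of human hearing (Hz)
SKIN_MIN, SKIN_MAX = 40.0, 500.0            # rough vibrotactile sensitivity (Hz)

def compress_to_skin(freq_hz: float) -> float:
    """Map an audible frequency onto the narrower tactile range on a log scale."""
    freq_hz = min(max(freq_hz, AUDIBLE_MIN), AUDIBLE_MAX)
    # Position of the input frequency within the audible range, 0.0 to 1.0.
    t = (math.log(freq_hz) - math.log(AUDIBLE_MIN)) / (
        math.log(AUDIBLE_MAX) - math.log(AUDIBLE_MIN))
    # Place it at the same relative (log-scale) position in the tactile range.
    return SKIN_MIN * (SKIN_MAX / SKIN_MIN) ** t

# A melody note and a bass note both land inside the skin's range,
# with their high/low relationship preserved.
print(compress_to_skin(55.0), compress_to_skin(880.0))
```

The log scale mirrors how pitch perception works: equal musical intervals map to equal distances along the tactile range, so relationships between notes survive the compression.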
The Emoti-Chair works by synching individual notes and sounds with vibrations and rocking motions at different levels of intensity.
The chair was structured to match the way the ear absorbs sound.
The top of the cochlea picks up higher frequencies and the bottom picks up the low ones.
So low-frequency speakers, or voice coils, were placed near the legs, while high-frequency speakers were located higher up.
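That cochlea-inspired layout can be sketched as a simple band-splitting rule: each vibration is routed to a coil according to its frequency, from the legs upward. The coil count and band edges below are illustrative assumptions, not the actual hardware specification.

```python
import math

NUM_COILS = 8                  # assumed number of voice coils, bottom to top
LOW_HZ, HIGH_HZ = 40.0, 500.0  # assumed tactile frequency range of the coils

def coil_for_frequency(freq_hz: float) -> int:
    """Return the coil index (0 = near the legs, NUM_COILS-1 = top) for a vibration."""
    freq_hz = min(max(freq_hz, LOW_HZ), HIGH_HZ)
    # Divide the range into equal log-spaced bands, echoing cochlear positions.
    t = (math.log(freq_hz) - math.log(LOW_HZ)) / (math.log(HIGH_HZ) - math.log(LOW_HZ))
    return min(int(t * NUM_COILS), NUM_COILS - 1)

# A bass rumble is felt near the legs; a melody note vibrates higher up.
print(coil_for_frequency(45.0), coil_for_frequency(450.0))
```

Routing different bands to physically separate coils is what distinguishes this design from single-channel vibrating products: the melody gets its own location on the body instead of being buried under the bass.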
Several products on the market appear similar to the Emoti-Chair in that they also utilize speakers or coils to help deaf people absorb vibrations.
However, their main drawback is they enable users to only absorb low frequency sounds, such as the bass and percussion, Fels said.
High-frequency sounds – such as the melody, voice and harmony – are overpowered by the percussion. So the listener misses out on a holistic experience.
Ellen Hibbard, a doctoral student and member of the CLT research team at Ryerson who has been deaf since birth, knows this challenge firsthand.
Hibbard said that until now she was never able to fully experience music, as she couldn’t distinguish between vocals and instruments.
“I needed the Emoti-Chair to bring out the richness and depth of music.”
She compared experiencing music through the Emoti-Chair, as opposed to another product, to the difference between watching a movie and reading a book.
Music is about more than the commoditization of sounds, according to Frank Russo, director of the SMART lab and a psychology professor at Ryerson.
“If you ask people what they like about music, they rarely say sound. They usually like the emotions and meaning of the song, more than sound.”
The SMART lab’s research centres on music cognition and the way the brain responds to music, beyond simply hearing sounds.
“We wanted to know what people actually experience when listening to music,” Russo said.
Because all the senses are connected in the brain, its neural architecture should be able to relay messages between the tactile and visual pathways.
This would allow those who are deaf and those who aren’t to experience the same feelings at a concert.
Based on the concert reactions, it’s clear this objective has been met, the SMART lab director said. “Reactions [on Thursday] night were incredible. To see deaf people become ecstatic and react so excitedly to a concert was exciting.”
Stephane Vera, a composer involved with CLT and SMART lab for the past two years, played two songs he composed with the assistance of the Emoti-Chair.
“I was the first to see its use as a musical instrument,” he said.
Vera composed while listening to white noise through headphones, to figure out what a deaf person might find appealing based on feel alone. He would then go back and layer the tempo and drums on top of the composition at the end.
The territory is still uncharted, but very rewarding, he said. “To see what I call ‘musical virgins’ be brought to tears while listening to my music is so thrilling. The more feedback I get, the more I want to write.”
Many Emoti-Chair users at the concert wanted to see these chairs installed at concert venues and made available commercially.
However, that’s still a few years away, Russo said.
“At the moment, the chairs work and could be commercialized with the right investment, but we’re researchers and perfectionists, so there is a lot more tweaking we’d like to do.”