The History of Immersive Sound
Sound spatialization is often presented as the future of our immersive experiences. But is it really that futuristic?
Human beings perceive a multitude of sounds coming from all directions. Even when listening to someone speak in a quiet room, we hear hundreds of echoes generated by the configuration of the space. This is what makes our experience immersive.
When we artificially reproduce recorded sounds, the challenge is to convince our brain that we are in a ‘believable’ environment: the experience becomes immersive when we feel that we are receiving sounds the way we normally do. Listening feels natural when the sound environment is truly 3D and gives the impression of information arriving from all directions.
So it’s all about copying what already exists, but how did we arrive at that conclusion?
Creating the Spatialization of Sound by Listening to Ourselves
Before we can even imagine reproducing any kind of sound, we need to understand how humans perceive it. The process of sound design and production is therefore profoundly human; it is about copying the behavior of our ears and of our brain to create customized technological tools.
As Antoine Petroff, Director of Immersive Products at IRCAM amplify, explains, humans perceive sounds and noises all the time. It is at the very foundation of our species: sounds allow us to react to our environment. Because we have two ears, we perceive a sound differently depending on our position relative to its source: each ear receives a slightly different version of it, all the time.
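To make this concrete, here is a minimal Python sketch (not taken from IRCAM amplify’s tools) of one of the cues behind this: the interaural time difference, the tiny delay between a sound’s arrival at each ear. It uses the classic Woodworth spherical-head approximation; the head radius and speed of sound are generic textbook values, not measured ones.

```python
import math

def interaural_time_difference(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate the interaural time difference (ITD) for a distant source.

    Uses the classic Woodworth spherical-head approximation:
    ITD ~= (r / c) * (sin(theta) + theta), with theta the azimuth in radians.
    The 8.75 cm head radius is a common textbook assumption.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source 45 degrees to the right reaches the near ear roughly 0.4 ms earlier.
print(f"{interaural_time_difference(45) * 1000:.3f} ms")
```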
Historically, cinema took the first steps towards sound spatialization. Disney is commonly mentioned as being among the first to experiment with spatialized sound: in 1940 the film Fantasia was released, together with its multi-speaker sound installation, Fantasound. Some years later, Henry Jacobs took the experimentation a step further with his Vortex Concerts project. In a planetarium whose dome was equipped with speakers, he managed to diffuse sound in such a way that it rotated from speaker to speaker. Step by step, research into immersive sound was building towards a richer experience.
Technically, researchers came up against a major problem: how to make sound seem to come from different places? How to give the brain the illusion of natural sound by creating several sound sources? This led to the development of binaural sound. In the late 1960s, the engineers Plenge, Kürer and Wilkens began experimenting with ways to capture sound from all sides, using recording tools designed to look like a human head (“dummy heads”). It was by reproducing human anatomy that they began to record in a completely new way.
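This is also, in reverse, how binaural sound is played back today: instead of recording through a dummy head, a mono signal is convolved with a pair of head-related impulse responses (HRIRs) measured on such a head. The sketch below illustrates the idea; the impulse responses here are crude placeholders, since real HRIRs come from measured datasets.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialize a mono signal for headphones by convolving it with a
    head-related impulse response (HRIR) pair, i.e. the filtering a dummy
    head would have applied at recording time."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Placeholder data: in practice the HRIRs come from a measured HRTF set.
fs = 48000
mono = np.random.randn(fs)                         # one second of noise as a stand-in source
hrir_left = np.zeros(256);  hrir_left[0] = 1.0     # dummy impulse response, left ear
hrir_right = np.zeros(256); hrir_right[20] = 0.7   # delayed, attenuated right ear
stereo = render_binaural(mono, hrir_left, hrir_right)
```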
At the Forefront of Immersive Sound
From the 1970s onwards, new experiments were undertaken, first with the Acousmonium, developed by François Bayle, which took the form of an orchestra of loudspeakers. This was the first time such phonographic projection had been used, ideally accompanied by light effects. The Acousmonium furthered the evolution of acousmatic music, which seeks to put the sound itself at center stage.
One thing led to another, and compositions conceived for several different channels of sound were born (the first being Karlheinz Stockhausen’s “Kontakte”, conceived for the quadraphonic format). Ambisonic sound was developed in 1975, using 360° recording to map the sound field and broadcast it on any medium. This was a major development, making it possible to record on four channels. In the 1990s, research evolved to include new formats (such as Higher Order Ambisonics, or HOA) allowing a greater number of channels, both in capture and playback.
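For the curious, here is a minimal sketch of first-order Ambisonic encoding, showing where those four channels (traditionally labelled W, X, Y, Z) come from. Gain conventions differ between variants of the format; this uses the traditional B-format weighting, and the source direction is an arbitrary example.

```python
import numpy as np

def encode_first_order_ambisonics(signal, azimuth_deg, elevation_deg):
    """Encode a mono signal into traditional first-order B-format (W, X, Y, Z).

    W carries the omnidirectional component (scaled by 1/sqrt(2) in the
    traditional convention); X, Y, Z carry the figure-of-eight components
    along the front, left and up axes."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)
    x = signal * np.cos(az) * np.cos(el)
    y = signal * np.sin(az) * np.cos(el)
    z = signal * np.sin(el)
    return np.stack([w, x, y, z], axis=-1)

# Example: a source 90 degrees to the left, at ear height.
b_format = encode_first_order_ambisonics(np.random.randn(48000), azimuth_deg=90, elevation_deg=0)
```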
Immersive sound is also used in so-called mainstream music. Pink Floyd was already playing concerts with a custom-made quadraphonic speaker system in the 1960s, and the singer Björk was among the first artists to use multichannel devices during her concerts. Jean-Michel Jarre also used a multichannel system during his concert in the Egyptian desert near Giza. The focus is on the listener, who experiences new emotions through sound.
Increasingly Accessible Sound Experiences
The technology eventually caught up with concepts that had been waiting for a solid foundation to build on. This began with the large-scale deployment of wave field synthesis (WFS) systems in the mid-2000s. In 2012, IRCAM’s projection room (the Espro) was equipped with a new multi-channel sound spatialization system combining WFS and surround sound systems. Today, you can experience these collective immersive works right in the heart of Paris; such sound experiences are becoming more accessible to the public.
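As a rough illustration of the WFS principle, the sketch below computes a delay and gain for each speaker in a line array so that their combined wavefronts approximate those of a virtual source. It deliberately omits the real WFS driving function (2.5D correction, spectral shaping); the speaker spacing and source position are made-up values.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def delays_and_gains(virtual_source, speaker_positions):
    """Per-speaker delay and gain for a virtual point source behind a linear array.

    A deliberately simplified illustration of the wave field synthesis idea:
    each speaker re-emits the source signal delayed by its distance to the
    virtual source and attenuated with distance, so the superposed wavefronts
    approximate the field of that source. The real WFS driving function adds
    a 2.5D correction and spectral shaping, omitted here."""
    distances = np.linalg.norm(speaker_positions - virtual_source, axis=1)
    delays = distances / SPEED_OF_SOUND       # seconds
    gains = 1.0 / np.maximum(distances, 0.1)  # crude 1/r attenuation, clamped
    return delays, gains

# Hypothetical setup: 16 speakers spaced 20 cm apart along the x axis,
# virtual source 2 m behind the array, offset to one side.
speakers = np.stack([np.arange(16) * 0.2, np.zeros(16)], axis=1)
source = np.array([2.5, -2.0])
delays, gains = delays_and_gains(source, speakers)
```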
IRCAM is also the originator of the Spat software, a technology that makes it possible to artificially recreate the movement and reverberation of sound. It is now possible to spatialize sounds from a laptop. From the user’s perspective, the integration of immersive sound into listening devices has helped democratize it: improved headphones, connected devices, sound bars, and so on. Users are discovering a new, more accessible way of listening to music.
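Spat itself is a full rendering engine; as a much simpler stand-in for “spatializing sound from a laptop”, here is a constant-power stereo pan that sweeps a mono source across the stereo field. It is a generic illustration, not Spat’s API.

```python
import numpy as np

def constant_power_pan(mono, pan):
    """Pan a mono signal between two speakers with constant perceived power.

    `pan` runs from 0.0 (hard left) to 1.0 (hard right), per sample, so a
    ramp makes the source appear to travel across the stereo field."""
    angle = pan * np.pi / 2.0
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.stack([left, right], axis=-1)

# Sweep a one-second noise burst from left to right.
fs = 48000
mono = np.random.randn(fs)
trajectory = np.linspace(0.0, 1.0, fs)   # pan position for every sample
stereo = constant_power_pan(mono, trajectory)
```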
Finally, in the early 2010s, Dolby Atmos made its debut with Pixar’s film Brave. From then on, stereo or spatialized sound could easily be played through several speakers or through headphones. Spatialized sound is becoming more accessible: it is even available on streaming platforms, with no need for high-end equipment.
What Next for Sound Spatialization?
Immersive sound has a bright future: public demand is growing, and the quality of sound promises the kind of experiences audio content creators are looking for. It is also driven by major manufacturers, who are pushing the development of technologies that expand the possibilities of immersive sound. We are at a defining moment: sound, neglected for too long, is finally taking its rightful place at the core of our immersive experiences.
– Mathilde Neu and Antoine Petroff
Think we're on the same wavelength?