Stefan Du Randt breaks down the mixing of DOBBY’s “Ancestor” in Dolby Atmos Music and highlights the tricks and techniques for mixing a track in spatial audio.
From surround to immersive, Dolby Atmos is the next step in the evolution of spatial audio. Dolby Atmos Music takes your listening experience beyond the ordinary and puts you “inside” every song, giving you unparalleled clarity for every detail of the music.
With more access to this technology than ever before, there’s never been a better time to take your listeners on a cinematic journey with Dolby Atmos Music. Ready to take your music to the next level?
The way we listen to audio is changing – it’s becoming bigger, bolder and more immersive than ever before.
While the Dolby Atmos logo can now be found practically everywhere (on your TV, in your local cinema, even on your phone), have you ever wondered what makes it special? How is it different from surround sound? And how does it work in comparison to the familiar stereo format?
In this blog post we’ll answer all of those questions and more, exploring the history of audio playback and all of the exciting things Dolby Atmos is bringing to the table.
All About Stereo
Early Gramophone Patent. United States Patent Office, Washington, D.C., 1895
Taking a quick look into the history of sound reproduction, we can see four main steps leading up to the creation of Dolby Atmos.
We began in ‘mono’ – a single channel recorded with a single microphone.
Around the 1930s, stereo audio began to appear. This type of audio can be recorded with two microphones positioned around the sound source (a guitar or piano are common examples), with the signals from each microphone assigned to either the left or right channel. The sound reaches each microphone with slight differences in timing and frequency, creating the illusion of width and space when we listen back on stereo speakers.
A stereo listening setup involves two speakers. When a stereo track is played, an imaginary 1-dimensional ‘sound field’ is created between the speakers. To hear the most convincing ‘sound field’, you’ll either need to use headphones or stay equally distant from the left and right speakers.
We can move the position of a sound in between the left and right channels by decreasing either side’s signal level – this is called ‘panning’. A louder signal on the left side will move the sound towards the left and vice versa. We can also use mixing tools like EQ, dynamic control and reverb to give the illusion that sounds are closer or further away. Still, they remain trapped in the 1-dimensional sound field between the speakers.
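For the more technically minded, here is a minimal sketch of what a pan control is doing under the hood. It assumes a constant-power (sine/cosine) pan law, which is one common choice – the exact curve varies between consoles and DAWs.

```python
import numpy as np

def pan_mono_to_stereo(mono, pan):
    """Place a mono signal in the stereo field.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Uses a constant-power (sine/cosine) pan law so the perceived
    loudness stays roughly even as the sound moves across the field.
    """
    angle = (pan + 1.0) * np.pi / 4.0        # map [-1, 1] onto [0, pi/2]
    left = mono * np.cos(angle)
    right = mono * np.sin(angle)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Example: a 1 kHz tone sitting slightly left of centre
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
stereo = pan_mono_to_stereo(tone, pan=-0.3)
```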
Adding Other Dimensions
5.1 Surround Sound Setup
The next step after stereo was to add another dimension to our listening setup. A conventional surround sound format is described as either 5.1 or 7.1, meaning 5 or 7 speakers surrounding you at ear level (plus an added subwoofer, or the .1). This creates a 2-dimensional sound field where we can move sounds front-to-back as well as left-to-right.
5.1 is the most common surround sound speaker layout and is usually what you’ll find in a home cinema. It consists of centre, left and right speakers in front of the listener, plus surround left and right speakers slightly behind the listener. With this layout, we can pan sounds not just between a left and right speaker, but between any combination of the 5.
A 7.1 system uses 4 surround speakers, allowing us to split up the rear and side sound effects. In this layout, the side speakers are positioned at about 90 degrees to the listener, while the rear speakers sit behind.
These two layouts can be scaled up for commercial use. In a commercial surround sound cinema, for example, there will be multiple speakers in each position to account for the larger audience.
Expanding on this surround setup even more we can add either 2 or 4 height channels (written as .2 or .4) above the listener to reach the final step in our journey: a 3-dimensional sound field. With setups like these (such as 7.1.2), you become immersed in audio travelling front-to-back, left-to-right and up-and-down. Combining these makes for endless directional possibilities and adds a whole new creative dimension to the art of audio mixing.
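If the naming convention is confusing, the pattern is simple: the first number is the ear-level speaker count, the second is the number of subwoofers (the LFE), and the third is the number of height speakers. Here is a small illustrative sketch in Python (the layout list is just a sample, not an exhaustive one):

```python
# How the "X.Y.Z" naming breaks down:
#   X = ear-level speakers, Y = subwoofers (LFE), Z = height speakers.
LAYOUTS = {
    "stereo": {"ear_level": 2, "lfe": 0, "height": 0},
    "5.1":    {"ear_level": 5, "lfe": 1, "height": 0},
    "7.1":    {"ear_level": 7, "lfe": 1, "height": 0},
    "7.1.2":  {"ear_level": 7, "lfe": 1, "height": 2},
    "7.1.4":  {"ear_level": 7, "lfe": 1, "height": 4},
}

def total_speakers(name):
    """Total number of speakers (including the subwoofer) in a named layout."""
    return sum(LAYOUTS[name].values())

print(total_speakers("7.1.4"))  # 12 speakers in total
```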
All of these surround sound systems share one similar goal: to reproduce audio in a way that replicates how we hear in real life. It’s almost as if the sound is turned into a physical object within the space…
Channel-Based vs. Object-Based
Conventional stereo or surround formats are channel-based, meaning individual tracks in a mix are routed to a single stereo or surround output channel. A pan control on each track determines which speaker(s) the signal is sent to, whether it be left, right, back left, etc. In this format, the mix is committed to a specific number of channels, meaning that in order to listen to the mix, you need a playback device which is optimised for that type of mix and has the right number of speakers.
An object-based system like Dolby Atmos removes this restriction. Instead of panning a sound between a fixed number of channels, Dolby Atmos can store the position as metadata, similar to X, Y and Z coordinates in the 3-dimensional sound field. When mixing, this metadata and the audio for that track are sent separately to the Dolby Atmos rendering software, where they are re-combined to make an ‘object’.
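To make the idea concrete, here is a rough Python sketch of what an ‘object’ conceptually carries: the audio itself plus positional metadata that travels alongside it. This is purely illustrative – the real Dolby Atmos metadata format is far more detailed than three coordinates.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AtmosObjectSketch:
    """Illustrative only: a mono audio signal paired with positional metadata.

    Positions here are normalised room coordinates:
      x: -1 (left)      to +1 (right)
      y: -1 (behind)    to +1 (in front)
      z:  0 (ear level) to  1 (ceiling)
    """
    name: str
    audio: np.ndarray   # mono samples
    x: float
    y: float
    z: float

# A vocal anchored front and centre, and an effect floating up behind the listener
lead_vocal = AtmosObjectSketch("lead vocal", np.zeros(44100), x=0.0, y=1.0, z=0.0)
riser_fx = AtmosObjectSketch("riser FX", np.zeros(44100), x=0.6, y=-0.8, z=0.9)
```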
However, the renderer software is not entirely object-based. You can also use it like a conventional channel-based system. This means that you can route some of your tracks to a surround output bus (like 7.1.2) and the surround panning position is baked into the signal rather than stored separately as metadata. These specific channels are referred to as the ‘bed’ in the Dolby Atmos Renderer.
Which should we use, object or bed? It’s easier to use a ‘bed’ for signals that won’t move around the 3D space, or those recorded in stereo or surround (with 2 or more microphones). Only tracks that are routed as a bed can be sent to the LFE channel, so that means any bass-heavy sounds should use a bed.
Objects are better for providing a really precise spatial location, or for signals that are going to move around. Objects can only have one audio signal, so multi-signal recordings like stereo would need multiple objects.
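If it helps, those guidelines can be boiled down to a simple rule of thumb. The little helper below is hypothetical – it isn’t part of any Dolby tool – but it captures the decision described above:

```python
def bed_or_object(needs_lfe, channel_count, will_move, needs_precise_position):
    """Rule-of-thumb routing suggestion based on the guidelines above."""
    if needs_lfe:
        return "bed"     # only bed channels can feed the LFE
    if channel_count > 1 and not will_move:
        return "bed"     # stereo/surround recordings sit naturally in a bed
    if will_move or needs_precise_position:
        return "object"  # mono objects give precise, movable placement
    return "bed"

# A mono synth line that flies around the room -> object
print(bed_or_object(needs_lfe=False, channel_count=1,
                    will_move=True, needs_precise_position=True))
```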
The vital part of Dolby Atmos is its renderer. With the renderer, the finished Dolby Atmos mix can be played back on systems with any speaker layout: stereo, 5.1, 7.1.2 etc. The renderer turns the signals into a channel-based output which fits the speaker layout it’s about to be played on.
Of course, this means that the more speaker channels you have available, the more accurate and precise the 3D sound field will be.
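To give a feel for what ‘rendering’ means in practice, here is a deliberately simplified toy in Python. It is not Dolby’s actual algorithm – the real renderer is far more sophisticated – but it shows the basic idea: the same object, with the same position metadata, gets turned into per-speaker signals for whatever layout happens to be connected.

```python
import numpy as np

# Toy speaker layouts as (x, y, z) positions; real layouts follow Dolby's specifications.
STEREO = {"L": (-1, 1, 0), "R": (1, 1, 0)}
FIVE_ONE = {"L": (-1, 1, 0), "R": (1, 1, 0), "C": (0, 1, 0),
            "Ls": (-1, -1, 0), "Rs": (1, -1, 0)}

def render_object(audio, position, layout):
    """Spread a mono object across a layout, weighting speakers by proximity.

    A toy illustration of the renderer's job: speakers closer to the object's
    position get more of the signal, and the same object adapts automatically
    to whichever layout is passed in.
    """
    pos = np.array(position, dtype=float)
    weights = {name: 1.0 / (np.linalg.norm(pos - np.array(spk, dtype=float)) + 1e-3)
               for name, spk in layout.items()}
    norm = np.sqrt(sum(w * w for w in weights.values()))  # keep overall power constant
    return {name: audio * (w / norm) for name, w in weights.items()}

# The same front-right object rendered to two different layouts
tone = np.ones(4)
for layout in (STEREO, FIVE_ONE):
    out = render_object(tone, (0.7, 0.5, 0.0), layout)
    print({name: round(float(sig[0]), 2) for name, sig in out.items()})
```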
What About Headphones?
Stereo has always been our preferred listening format for music, whether that means a pair of speakers in your home, a live music venue, or listening on the go with your phone and a pair of headphones. But how can we make immersive audio work with just a standard pair of headphones?
You may be familiar with binaural audio. This is a recording technique where microphones are placed inside a mannequin head so they capture sound the way human ears would. When we listen back on headphones, it’s as if we are inside a 3D sound field reconstruction of the recording location.
Our ears can detect the position of a sound by comparing volume, frequency content and timing differences between the sound arriving at each ear. These differences are created by the physical distance between your ears and the shape of your head, or ‘head shadow’. You can artificially recreate this by applying the same principles to an audio signal – a technique called binaural rendering.
Binaural rendering uses HRTF (Head Related Transfer Function) algorithms. It creates a virtual human head based on the average head shape and uses this to process the signal. Unfortunately, this means that the further away from the average shape you are, the less realistic the 3D binaural experience will be.
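As a very rough illustration of the principle (not of real HRTF processing, which uses measured, frequency-dependent filters), the sketch below places a mono sound to one side using only the timing and level differences described above:

```python
import numpy as np

def crude_binaural(mono, azimuth_deg, sr=44100):
    """Very rough binaural placement using only level and timing differences.

    azimuth_deg: 0 = straight ahead, +90 = fully right, -90 = fully left.
    Real HRTF rendering also applies frequency-dependent filtering from the
    shape of the head and ears; this sketch deliberately skips all of that.
    """
    az = np.radians(azimuth_deg)
    max_itd = 0.0007                    # ~0.7 ms maximum interaural time difference
    delay = int(abs(np.sin(az)) * max_itd * sr)
    gain_near, gain_far = 1.0, 0.6      # crude interaural level difference
    delayed = np.concatenate([np.zeros(delay), mono])
    direct = np.concatenate([mono, np.zeros(delay)])
    if azimuth_deg >= 0:                # source on the right: the left ear is the far ear
        left, right = gain_far * delayed, gain_near * direct
    else:                               # source on the left: the right ear is the far ear
        left, right = gain_near * direct, gain_far * delayed
    return np.stack([left, right], axis=-1)

# A short burst of noise placed 45 degrees to the listener's right
binaural = crude_binaural(np.random.randn(4410) * 0.1, azimuth_deg=45)
```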
Measuring your personalised HRTF previously required measuring your head shape with complex technology in a sound-proofed room – not very easy to access. However, the release of iOS 16 this month has made ‘Personalised Spatial Audio’ available to iPhone users. To use it, you’ll need one of the more recent AirPods models plus an iPhone with iOS 16 and a ‘TrueDepth’ camera. The phone can then scan your face and ears in order to optimise the audio output for your unique facial profile.
As well as various speaker systems, Dolby Atmos mixes can also be rendered to binaural audio. This is the vital feature which unlocks the world of Dolby Atmos music for average listeners using conventional headphone or stereo setups. Apple Music’s Spatial Audio with support for Dolby Atmos uses a similar system. Apple Music can now play Dolby Atmos tracks on all AirPods or Apple headphones, plus their latest devices with the right built-in speakers.
Dynamic head tracking is another important element in Apple Music’s Spatial Audio. This involves monitoring the position of your head and adjusting the audio so it appears to stay in the same place as you move. This enhances music-listening by not only recreating a live music experience but also allowing for our natural head movements when listening to sound.
Is Dolby Atmos the future of music?
At first it seemed unlikely, but with all of these developments towards integrating Dolby Atmos into every listening device and setup we use, the world of immersive audio is effortlessly establishing itself in our everyday lives just as stereo once did.
Dolby Atmos Music is the latest in surround sound technology from Dolby Laboratories. With Dolby Atmos (also referred to as “Spatial Audio” on Apple Music) you can experience an immersive auditory environment while watching a film or TV show, playing a video game or listening to your favourite music.
While originally developed for film – launching in LA’s Dolby Theatre for the 2012 premiere of Disney animation Brave – Dolby Atmos is now gaining a lot of attention in the music world with the release of Apple’s Spatial Audio, allowing Apple Music listeners to experience immersive audio right from their headphones and compatible Apple devices.
So how does Dolby Atmos Music differ from the surround sound systems that we are used to? There are two key elements that define Dolby Atmos Music:
Height channels. In a typical surround sound setup, you have a ring of either 5 or 7 channels in front of, to the sides of, and behind you. Dolby Atmos adds overhead channels as well, meaning sound can come from above and all around you, creating a virtual 3D space.
Object-based audio. With typical surround sound we use channel-based audio, where audio is mixed for a specific speaker setup (e.g. a 7-channel surround sound). Dolby Atmos instead uses coordinates in a virtual space to map out different discretely placed sounds, meaning the mix can be played back on almost any type of setup, from headphones to a cinema!
What is the difference between “channels” and “speakers”?
To better understand Dolby Atmos, we first need to understand the difference between channels and speakers.
A typical 7.1.4 Dolby Atmos Speaker Setup via dolby.com
In a small setup such as a home cinema, you might only have one speaker per channel – three at the front (left, right, centre), two to the sides (left mid surround and right mid surround) and two behind (left surround and right surround). Scale this up to a commercial cinema filled with hundreds of people and you’ll probably need more than one speaker per channel, especially along the sides. You might have 6 speakers spread along the left wall, so if a sound is sent to the left, it will play out at equal volume from all 6 of these speakers.
A Dolby Atmos system adds finer detail to this. It figures out how many speakers there are, and can then control each of them independently to move a sound around the space in an incredibly realistic way. The benefit of object-based audio is that, whether you have 5, 7 or 128 speakers around the listener, the format is completely scalable: the instrument or effect you have moving around the virtual 3D space will be reproduced faithfully across all Dolby Atmos setups.
A Dolby Atmos setup can be as simple as 2 speakers and a subwoofer. via dolby.com
A more complex 11.1.8 Dolby Atmos setup, with 11 speakers around the listener. via dolby.com
History of Dolby Atmos
Dolby Atmos debuted in 2012 with Disney Pixar’s “Brave”, the first film mixed in the format, which means it has now been around for a decade. It is the latest audio innovation from Dolby Laboratories, the company founded by American engineer Ray Dolby, who brought surround sound to cinemas in the years after its founding.
Surround sound began with the 5.1 setup – 5 surrounding channels plus a subwoofer (also called the LFE, or Low Frequency Effects, channel). This was followed by the 7.1 setup, which added two more channels behind the listener. Then Dolby Atmos arrived, adding 2 to 4 height channels on the ceiling and creating the possibility of 5.1.2, 5.1.4, 7.1.2 and 7.1.4 speaker setups.
Both surround sound and Dolby Atmos have largely been reserved for the cinema or the recording studio. If you’re lucky enough to have a home cinema, you may have enjoyed these experiences without having to travel. Now, Dolby Atmos is more accessible than ever, with a number of streaming services and playback devices replicating the immersive experience of a studio or cinema wherever you are.
Can I Listen to Dolby Atmos in my Headphones?
The short answer is yes! Even with just two headphone speakers, you can experience immersive audio. So why are there Dolby Atmos-enabled headphones if any pair of headphones can play Dolby Atmos?
While most of the processing is done on your playback device and any pair of headphones will work for the binaural version of the mix, Spatial Audio headphones often come with multiple drivers and additional sensors that enable dynamic head tracking. This lets you look around the 3D space as you move your head. For example, if a keyboard is placed to the right of the listener, turning your head to the right means you will hear that keyboard in front of you (as if you were looking at it), while everything that was previously in front of you now comes from your left.
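The maths behind head tracking is surprisingly simple: to keep a sound anchored in the room, the renderer subtracts your head rotation from the sound’s position before the binaural processing happens. A small sketch of that keyboard example:

```python
def world_to_head_azimuth(object_azimuth_deg, head_yaw_deg):
    """Keep a sound anchored in the room as the listener turns their head.

    object_azimuth_deg: where the sound sits in the room (0 = in front of a
                        forward-facing listener, +90 = to their right).
    head_yaw_deg:       how far the head has turned (positive = to the right).

    Returns the azimuth relative to the head, which is what the binaural
    renderer actually needs.
    """
    return (object_azimuth_deg - head_yaw_deg + 180) % 360 - 180

print(world_to_head_azimuth(90, 90))  # 0   -> the keyboard is now straight ahead
print(world_to_head_azimuth(0, 90))   # -90 -> what was in front now sits to your left
```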
Why Mix in Dolby Atmos Music?
The ability to listen to Dolby Atmos Music mixes on virtually any device is an exciting step forward for musicians and producers. Spatial audio is now no longer limited to those with complex speaker setups or cinema technology – it can be accessed by the everyday music fan.
TIDAL and Amazon Music both added support for Dolby Atmos Music in 2019. This was followed by Apple Music, who announced their ‘Spatial Audio with support for Dolby Atmos’ in June 2021.
The main benefit of producing and mixing in Dolby Atmos Music is the new level of freedom. You’ll have the opportunity to use a new dimension of creativity when it comes to sound placement. Studios 301 Dolby Atmos engineer Stefan Du Randt explains,
“It really is the future of music. The format can make your mixes feel cinematic and immersive, almost like you’re watching the story of the song unfold.”
Stefan Du Randt
Another benefit is the ability to create more separation between sounds by adding physical space between them. A busy mix can be organised with instruments above, behind and beside you so they can all be heard clearly. This also gives you the opportunity to create even larger mixes, packing a huge range of sounds into one arrangement without losing track of any of them. The format also gives you more control over how effects move through the space: if you want a sweeping sound to travel from behind the listener into the centre speaker in front of them, you can do that with Dolby Atmos Music.
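That kind of sweeping move is really just position automation: the object’s coordinates change over time and the renderer follows. Here is a small, hypothetical sketch of a rear-to-front sweep expressed as a list of positions (the coordinate convention is our own, not Dolby’s):

```python
import numpy as np

def rear_to_front_sweep(duration_s, points_per_second=100):
    """Position automation that sweeps an object from behind the listener
    to the centre front, as in the example above.

    Returns an array of (x, y, z) points. Coordinates: x = left/right,
    y = back (-1) to front (+1), z = height (0 = ear level, 1 = ceiling).
    """
    n = int(duration_s * points_per_second)
    t = np.linspace(0.0, 1.0, n)
    x = np.zeros(n)               # stay on the centre line
    y = -1.0 + 2.0 * t            # travel from the rear (-1) to the front (+1)
    z = 0.4 * np.sin(np.pi * t)   # optional gentle arc overhead on the way forward
    return np.stack([x, y, z], axis=-1)

path = rear_to_front_sweep(duration_s=4)
print(path[0], path[-1])          # starts behind the listener, ends at the front centre
```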
How can I release my music in Spatial Audio?
If you’re ready to make your music as immersive and exciting as possible, then you’re ready for Dolby Atmos Music!
Before you book a session, make sure you have the following:
Final, signed-off stereo master file (remember that stereo and Dolby Atmos Music are two separate formats. In order to fulfil a Dolby Atmos mix, we require the finished stereo master. This also ensures that the Dolby Atmos mix will match the vibe and loudness of the stereo version. If you don’t have a stereo mix yet, you can book in a “Full Mix”, which includes stereo and Dolby Atmos.)
RØDE Microphones ran a session in Studio 1 with Jack Prest for one of their endorsed artists. Jack recorded ‘Battle Ax’ – an experimental classical/fusion viola player – with the assistance of RØDE microphones and their technicians.
A$AP Twelvyy recorded for 3 days with Tom Garnett in the Warner tenancy room (Studio 8). A$AP was on tour here for a few weeks and brought along Kid Laroi to track vocals on a collaboration.
MusicNSW and 301 hosted the Levels Masterclass series in the studios on the 18th of May. This featured four studios with over 50 students working across songwriting, production and mixing techniques with Milan Ring, Mookhi, Sparrows and Rebel Yell.
SIMA and ABC Classics hosted a live album recording for Julien Wilson’s jazz quartet in Studio 1. There were over 110 people in attendance, with Owen Butcher facilitating the live recording and a stream to ABC radio.
“Thank you so much for a seamlessly successful event for our Sydney Symphony Vanguard members program. I was so impressed by your professionalism, friendliness and accommodation of all of our requests. The event was well staffed and the team went out of their way to make us feel at home. […] It was a huge honour to hold an event in such an iconic space and we are so grateful for your hospitality at all stages of event planning.”
Leon Zervos has been working on new releases for The Veronicas, Jess Mauboy, Stan Walker, Jungle Giants, Montaigne, Slum Sociable, Cyrus, Sahara Beck, JEFFE, Fergus James and Dawn Avenue (Mexico).
Steve Smart has mastered music for Dean Lewis, Vance Joy, Spookyland, No Frills Twins, Oh Reach, Lakyn, RedHook, Abi Tucker, Danielle Spencer, Dande and the Lion, PLANET, and Ivey.
Andrew Edgson has worked on tracks for The Lulu Raes, The Laurels, Yeevs, Foreign Architects, Merpire, Black Aces, The Paddy Cakes, Noah Dillon, Jack Botts and Fatin Husna (Malaysia).
Ben Feggans has been mastering for Luboku, Oh My My, Emma Hewitt, Love Deluxe, Nick Cunningham and remixes for Alison Wonderland and Owl Eyes.
Congratulations to all of this year’s ARIA nominees, with a special shout out to our incredible clients and their teams that have been nominated including Amy Shark, Esoterik, Adam Eckersley & Brooke McClymont, Jessica Mauboy and Jimmy Barnes. Additional kudos to our Studios 301 team who have worked on their releases.
Our nominated clients and works are as follows:
https://studios301.com/our-work/love-monster/
Apple Music Album of the Year / Best Female Artist / Best Pop Release. Wonderlick Recording Company. Mastered by Leon Zervos.
https://studios301.com/our-work/my-astral-plane/
Best Urban Release. Flight Deck / Mushroom Group. Mastered by Leon Zervos.
https://studios301.com/our-work/adam-brooke/
Best Country Album. Lost Highway Australia / Universal Music Australia. Mastered by Leon Zervos. “Be like you” and “Awake” vocals engineered by Stefan Du Randt.
Best Original Soundtrack or Musical Theatre Cast Album. “Texas Girl At the Funeral of Her Father”, recorded at Studios 301. Engineered by Owen Butcher; assistant engineer Tom Garnett.
August was our biggest month this year in the studios!
Katy Perry at Studios 301 Sydney
Zedd at Studios 301 Sydney
We were visited by global pop sensation Katy Perry and renowned DJ/producer Zedd on the Australian leg of Katy’s ‘Witness World Tour’. Katy and Zedd locked out Studios 1 and 2 for 10 days, recording some new material and working with our senior engineer Simon Cohen and his assistant team. Both artists got to hang out with our new studio puppy – what a treat!
While on tour, Katy Perry’s band members Tony Royster Jr & Chris Payton hit a midnight session with MXXWLL and Deutsch Duke. Our engineers Stefan Du Randt and Jack Garzonio say it’s one of the best sessions they’ve ever been a part of.
US R&B artist Pleasure P (Pretty Ricky) hit the studio with producer Willstah to work on music for the upcoming season of VH1 TV series Love & Hip Hop.
Guy Sebastian and Jess Mauboy hung out in the studio with over 10 local and international songwriters and producers for a 4 day writing camp. Other writers and artists included Graace, The Orphanage, Thief, Tushar and JOY.
Jess Mauboy
Delta Goodrem with studio puppy @sircharlesbarkley_
Australian songstress Delta Goodrem locked out our flagship room, Studio 1, for 3 days, bringing her whole band for a jam session! Delta invited some lucky fans to come and watch her rehearse in the studio, and laid down some tracks for a new release with our engineer Stefan Du Randt.
David Campbell and Chong Lim have been busy recording a project with our very own engineer, Jack Prest. Stay tuned for more updates on this new project.
ARIA Award Winning singer-songwriters Amy Shark and Samantha Jade spent the day writing and collaborating in Studio 1, with the help of our engineer Jack Garzonio.
Leon Zervos mastered Amy’s most recent album ‘Love Monster’ which debuted at #1 on the ARIA charts.
Amy Shark and Samantha Jade
Masterclasses
Anna Laverty Masterclass
August saw the launch of our masterclass brand with two great sessions. Anna Laverty and Simon Cohen ran masterclasses on production and mixing, both of which sold out within days. We had a huge waiting list of applicants eager to attend, so due to popular demand we will be running a series of future masterclasses.
Steve Smart mastered soundtracks for the original animated Netflix series Beat Bugs 2 and Motown Magic, featuring music from The Beatles and Motown catalogues respectively. He also mastered live albums for Gang of Youths’ MTV Unplugged and Paul Kelly’s Live at the Opera House.
Leon Zervos has been working on releases for Starley (Central Station Records), GLADES (Warner), Owl Eyes (Liberation), ALTA (Soothsayer), Harper Finn (NZ) and Cyrus (Sony).
Andrew Edgson mastered music for The Kite String Tangle (Warner) and Thelma Plum (Warner).
Ben Feggans worked on tracks for SAATSUMA (Grenadilla Sounds) and Jordi Ireland (Casablanca Records).