Stefan Du Randt breaks down the mixing of DOBBY’s “Ancestor” in Dolby Atmos Music and highlights the tricks and techniques for mixing a track in spatial audio.
From surround to immersive, Dolby Atmos is the next step in the evolution of spatial audio. Dolby Atmos Music takes your listening experience beyond the ordinary and puts you “inside” every song, giving you unparalleled clarity for every detail of the music.
With more access to this technology than ever before, there’s never been a better time to take your listeners on a cinematic journey with Dolby Atmos Music. Ready to take your music to the next level?
The way we listen to audio has changed dramatically. In 2026, it’s evolving faster than ever. Dolby Atmos is now standard on Apple Music and Amazon Music, Google and Samsung have launched the open-source Eclipsa Audio format, and the Grammy Awards have recognised immersive audio as a standalone category since 2019.
But what actually makes Dolby Atmos different from traditional surround sound? And how does it compare to the stereo format we’ve used for nearly a century?
In this guide, we break down the key differences between stereo, surround sound and Dolby Atmos, covering how each format works and what it means for artists, producers and listeners. (For a deeper look at how Dolby Atmos Music works specifically, see our companion guide: What is Dolby Atmos Music?)
All About Stereo
Early gramophone patent. United States Patent Office, Washington, D.C., 1895
Taking a quick look into the history of sound reproduction, we can trace four main steps leading up to modern spatial audio.
We began in mono: a single channel recorded with a single microphone.
Around the 1930s, stereo audio began to appear. Pioneered by engineers like Alan Blumlein at EMI (the same company that would later build Abbey Road Studios), stereo uses two microphones positioned around a sound source. The signals from each microphone are assigned to either the left or right channel, and subtle differences in timing and frequency between them create the illusion of width and space when played back.
A stereo listening setup uses two speakers. When a stereo track is played, an imaginary one-dimensional “sound field” is created between them. To hear the most convincing stereo image, you need headphones or a position equally distant from both speakers, often called the “sweet spot.”
We can move the position of a sound between the left and right channels by adjusting each side’s signal level. This is called panning. A louder signal on the left moves the sound towards the left, and vice versa. Mixing tools like EQ, compression and reverb can give the illusion that sounds are closer or further away, but they remain trapped in that one-dimensional field between the speakers.
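To make the panning idea concrete, here is a minimal sketch of a constant-power pan law, one common approach among several (the function name is our own, not from any particular console or DAW):

```python
import math

def constant_power_pan(sample: float, pan: float) -> tuple[float, float]:
    """Pan a mono sample into a stereo (left, right) pair.

    pan ranges from -1.0 (hard left) to +1.0 (hard right);
    0.0 keeps the sound centred. A constant-power (sine/cosine)
    law keeps the perceived loudness steady as the sound moves
    across the one-dimensional stereo field.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    left = sample * math.cos(angle)
    right = sample * math.sin(angle)
    return left, right
```

At centre pan both channels receive roughly 0.707 of the signal (-3 dB each), so the total acoustic power matches a hard-panned signal.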
Stereo remains the dominant format for music today. The vast majority of streaming, vinyl and digital releases are mixed and mastered in stereo.
Surround Sound: Adding Other Dimensions
5.1 Surround Sound Setup
The next step after stereo was to add another dimension. A conventional surround sound format is described as either 5.1 or 7.1, meaning 5 or 7 speakers surrounding you at ear level, plus a subwoofer (the “.1”). This creates a two-dimensional sound field where sounds can move front-to-back as well as left-to-right.
5.1 is the most common surround sound layout and is the standard for home cinema. It consists of centre, left and right speakers in front of the listener, plus surround left and right speakers slightly behind. With this layout, sounds can be panned between any combination of the five speakers.
A 7.1 system uses four surround speakers instead of two, splitting the surrounds into separate side and rear channels. The side speakers sit at roughly 90 degrees to the listener, while the rear speakers are positioned behind.
These layouts can be scaled up for commercial cinemas, where multiple speakers per channel account for larger audiences.
Expanding further, we can add height channels (either 2 or 4 speakers above the listener, written as “.2” or “.4”). A 7.1.2 setup, for example, gives us a three-dimensional sound field: audio can travel front-to-back, left-to-right, and up-and-down. This is where surround sound starts to become truly immersive.
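The layout shorthand can be decoded mechanically. As a small illustrative helper (our own naming), assuming the ear-level/subwoofer/height convention described above:

```python
def speaker_count(layout: str) -> int:
    """Total speakers in a layout string like '7.1.4'.

    Format: <ear-level>.<subwoofers>.<height> — the height group
    is optional (e.g. '5.1' has no overhead speakers).
    """
    parts = [int(p) for p in layout.split(".")]
    ear, sub = parts[0], parts[1]
    height = parts[2] if len(parts) > 2 else 0
    return ear + sub + height
```

So a 7.1.2 room totals ten speakers: seven at ear level, one subwoofer and two overhead.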
All surround formats share one goal: to reproduce audio in a way that replicates how we hear in real life, as if the sound has become a physical object in the room.
Dolby Atmos vs Surround Sound: Channel-Based vs Object-Based Audio
This is where Dolby Atmos fundamentally changes the game.
Stereo and conventional surround formats are channel-based. Individual tracks in a mix are routed to specific output channels (left, right, centre, surround left, and so on). The mix is locked to a specific number of speakers. To hear it correctly, your playback system needs to match.
Dolby Atmos is an object-based system. Instead of panning a sound to a fixed channel, Atmos stores the sound’s position as metadata, similar to X, Y and Z coordinates in a 3D space. When the mix is played back, the Dolby Atmos renderer reads this metadata and translates it to whatever speaker layout is available: stereo, 5.1, 7.1.4, or headphones.
Dolby Atmos isn’t entirely object-based, though. The format also supports a conventional channel-based approach. Sounds that won’t move around the 3D space, or recordings made with multiple microphones in stereo or surround, can be routed to a surround output bus. These fixed channels are called the “bed” in Atmos. Only bed channels can send audio to the LFE (subwoofer) channel, so bass-heavy elements typically use the bed.
Objects are better for sounds that need a precise spatial location or that move through the 3D space during playback. Each object carries a single audio signal, so a stereo recording would need two separate objects.
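Conceptually, an object pairs an audio signal with positional metadata that the renderer reads at playback. As a loose illustration only — this is not the real ADM metadata schema, and the field names are ours:

```python
from dataclasses import dataclass

@dataclass
class AtmosObject:
    """Illustrative sketch of object-based audio metadata.

    Real Dolby Atmos metadata is far richer (object size, zones,
    timed position automation), but the core idea is an audio
    signal paired with a position in a normalised 3D space that
    the renderer maps onto whatever speakers are available.
    """
    name: str
    x: float  # left (-1.0) to right (+1.0)
    y: float  # back (-1.0) to front (+1.0)
    z: float  # floor (0.0) to ceiling (1.0)

# e.g. a synth placed up and to the right, slightly behind the listener
synth = AtmosObject("arp synth", x=0.8, y=-0.3, z=0.6)
```

The point of the abstraction: the mix stores *where* a sound should be, not *which speaker* should play it.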
The renderer is what makes Dolby Atmos so versatile. The same mix plays on everything from a pair of earbuds to a 128-speaker cinema, with the renderer automatically adapting the spatial positioning. More speakers means a more precise 3D sound field, but even on headphones, the effect is convincing.
Quick Comparison: Stereo vs Surround vs Dolby Atmos

| | Stereo | 5.1 Surround | 7.1 Surround | Dolby Atmos |
|---|---|---|---|---|
| Speakers | 2 | 6 (5 + sub) | 8 (7 + sub) | Flexible (2 to 128+) |
| Sound field | 1D (left/right) | 2D (left/right, front/back) | 2D (wider rear field) | 3D (adds height) |
| Audio type | Channel-based | Channel-based | Channel-based | Object-based + bed |
| Height channels | No | No | No | Yes (2 or 4) |
| Adapts to playback system | No | No | No | Yes (renderer) |
| Headphone support | Native | Requires downmix | Requires downmix | Binaural rendering |
At Studios 301, our engineers work with artists on Dolby Atmos mixing sessions, whether remixing existing stereo tracks or creating immersive mixes from scratch. Our Atmos engineer Stefan Du Randt has mixed Atmos projects across genres, from pop and electronic to classical and film.
What About Headphones?
Stereo has always been our default for music, whether on speakers at home, at a live venue, or through headphones on the go. So how do you experience immersive 3D audio through just two ear speakers?
The answer is binaural rendering. Our ears detect the position of a sound by comparing volume, frequency content and timing differences between each ear. These differences are shaped by the physical distance between your ears and the contours of your head (the “head shadow”). Binaural rendering artificially recreates these differences using HRTF (Head Related Transfer Function) algorithms, which model a virtual head shape to process the audio signal.
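One of the cues HRTF processing recreates — the tiny arrival-time difference between your two ears — can be approximated with the classic Woodworth spherical-head formula. This is a textbook simplification for illustration, not what any streaming service actually runs:

```python
import math

def interaural_time_difference(azimuth_deg: float,
                               head_radius_m: float = 0.0875,
                               speed_of_sound: float = 343.0) -> float:
    """Approximate interaural time difference (ITD) in seconds.

    azimuth_deg is the source angle from straight ahead (0) toward
    one ear (90). Uses the Woodworth spherical-head model; real
    HRTFs also capture spectral cues from the outer ear and torso.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))
```

For a source directly to one side, the model predicts a delay of roughly two-thirds of a millisecond — tiny, but enough for the brain to localise the sound.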
The limitation is that HRTFs are based on average head measurements. The further your own head shape differs from the average, the less realistic the 3D effect becomes.
This has improved significantly since 2022. Apple’s Personalised Spatial Audio (available on iPhone and recent AirPods models) uses your phone’s TrueDepth camera to scan your face and ears, generating a custom HRTF profile optimised for your unique anatomy. The result is a noticeably more convincing spatial experience.
Dynamic head tracking takes it further. Sensors in supported headphones monitor the position of your head and adjust the audio so the sound field stays anchored in place as you move. If a guitar is placed to your right in the mix, turning your head right brings it to the centre, just as it would in real life.
Dolby Atmos mixes can be rendered to binaural audio on any headphones. Apple Music’s Spatial Audio, Amazon Music’s 3D Audio and TIDAL’s Dolby Atmos support have made this the primary way most listeners experience immersive music, with no speaker system required.
Spatial Audio in 2026: Where Things Stand Now
The spatial audio landscape has shifted dramatically since Dolby Atmos Music first launched on streaming platforms.
Dolby Atmos remains the dominant immersive music format. Apple Music reported that over 90% of its listeners have tried Spatial Audio, and immersive tracks now account for nearly one-third of all plays. Amazon Music and TIDAL continue to expand their Atmos catalogues, and 85 of the top 100 Billboard artists released music in Dolby Atmos over the past year.
Eclipsa Audio, introduced by Google and Samsung in January 2025, is a new open-source, royalty-free spatial audio format developed through the Alliance for Open Media. Unlike Dolby Atmos (which requires licensing), Eclipsa is free for anyone to create and distribute. Samsung’s 2025 TV and soundbar lineup supports it natively, YouTube accepts Eclipsa Audio uploads, and Google has released free Pro Tools plugins for Eclipsa mixing. It’s early days, but the removal of licensing barriers could make spatial audio accessible to far more independent creators.
Apple quietly unveiled its own spatial audio format, ASAF (Apple Spatial Audio Format), at WWDC 2025, building on its existing Dolby Atmos infrastructure with enhanced head-tracking capabilities.
Spotify, the world’s largest streaming platform, still does not natively support Dolby Atmos or spatial audio as of early 2026. The company has acknowledged it is working on immersive audio features, but no launch date has been confirmed.
The Grammy Awards added the Best Immersive Audio Album category in 2019. Recent winners include Peter Gabriel’s i/o (In-Side Mix) in 2025 and Justin Gray’s Immersed in 2026, demonstrating that the music industry takes spatial audio seriously as a creative format, not just a technical novelty.
Is Dolby Atmos the Future of Music?
In 2022, the question was whether Dolby Atmos would go mainstream. In 2026, the answer is clear: immersive audio is here to stay.
The real question now is which format will dominate. Dolby Atmos has the catalogue, the ecosystem and the artist buy-in. Eclipsa Audio offers an open-source alternative that could accelerate adoption, particularly on Android and YouTube. Apple is building its own proprietary extensions.
For artists and producers, the practical takeaway is straightforward: releasing in Dolby Atmos gives your music access to the fastest-growing segment of music streaming. And with binaural rendering making the experience available to anyone with headphones, the barrier to entry for listeners has essentially disappeared.
Frequently Asked Questions
Is Dolby Atmos the same as surround sound? No. Surround sound is channel-based, meaning audio is mixed for a specific speaker layout like 5.1 or 7.1. Dolby Atmos is object-based, meaning sounds are positioned in a 3D space and the renderer adapts the mix to whatever playback system you have, from headphones to a cinema.
Do I need special speakers for Dolby Atmos? No. Dolby Atmos can be experienced on any headphones through binaural rendering. For a speaker-based experience, a soundbar with Atmos support or a 5.1.2+ speaker setup will deliver the full spatial effect.
Can I listen to Dolby Atmos on Spotify? As of early 2026, Spotify does not natively support Dolby Atmos or spatial audio. Dolby Atmos Music is available on Apple Music, Amazon Music and TIDAL.
What is Eclipsa Audio? Eclipsa Audio is an open-source, royalty-free spatial audio format developed by Google and Samsung through the Alliance for Open Media. It offers similar immersive audio capabilities to Dolby Atmos but without licensing fees.
Is Dolby Atmos worth it for music? Yes, especially for artists seeking to differentiate their releases. Over 90% of Apple Music listeners have tried Spatial Audio, and immersive mixes now account for nearly a third of all plays on the platform. The Grammy Awards have also recognised immersive audio as a standalone category since 2019.
Get Your Music Mixed in Dolby Atmos
Ready to take your music into three dimensions? Studios 301 offers professional Dolby Atmos mixing for artists and labels. Whether you’re creating a new immersive mix from scratch or adapting an existing stereo release, our engineers can help.
Picture your music wrapping around the listener, not just from left and right, but from above, behind, and every direction in between. That’s Dolby Atmos Music.
Over 90% of Apple Music listeners have experienced Spatial Audio, and nearly a third of all plays on the platform are now in Dolby Atmos. Originally developed for cinema in 2012, Dolby Atmos has become the dominant format for immersive music, and it’s no longer a niche technology.
In this guide: how Dolby Atmos Music works, where to listen to it, and how to get your own tracks mixed in Atmos at Studios 301.
How Dolby Atmos Music Works
Dolby Atmos Music differs from traditional surround sound in two fundamental ways:
Height channels. A typical surround setup places 5 or 7 speakers around you at ear level. Dolby Atmos adds speakers overhead, so sound can come from above as well as from all sides, creating a true 3D listening space.
Object-based audio. Traditional surround sound is channel-based: audio is mixed for a fixed speaker layout (such as 5.1 or 7.1). Dolby Atmos uses coordinates in a virtual 3D space to position each sound as a discrete “object.” This means the same mix can be played back on anything from headphones to a 128-speaker cinema. The Dolby Atmos renderer automatically adapts the spatial positioning to the available system.
Dolby Atmos Channels vs. Speakers: How It Scales
A typical 7.1.4 Dolby Atmos Speaker Setup via dolby.com
Understanding the difference between channels and speakers is key to understanding how Dolby Atmos scales.
In a small home cinema, you might have one speaker per channel: three at the front (left, centre, right), two at the sides and two behind. Scale that up to a commercial cinema and you might need six speakers along the left wall alone, all playing the same “left” channel signal.
Dolby Atmos goes further. It can detect how many speakers are available and control each one independently, moving sounds through the space with precision. Whether your setup has five speakers or 128, the positions you set in the mix translate accurately, on any system, every time.
A Dolby Atmos setup can be as simple as 2 speakers and a subwoofer. via dolby.com
A more complex 11.1.8 Dolby Atmos setup, with 11 speakers around the listener. via dolby.com
A Brief History of Dolby Atmos
Dolby Atmos debuted in 2012 with the premiere of Disney Pixar’s Brave at the Dolby Theatre in Los Angeles. It represented the latest step in a progression from mono, to stereo, to surround sound, to fully immersive 3D audio.
Surround sound began with the 5.1 format: five channels plus a subwoofer (LFE). This was followed by 7.1, which added two more rear channels. Dolby Atmos built on this by introducing 2 to 4 height channels on the ceiling, enabling setups like 5.1.2, 5.1.4, 7.1.2 and 7.1.4.
For years, surround sound and Dolby Atmos were largely confined to cinemas and professional studios. That changed when streaming services and consumer devices began supporting Atmos playback, making immersive audio accessible to anyone with a pair of headphones.
Can I Listen to Dolby Atmos on Headphones?
Yes. For most people, headphones are where they first experience Dolby Atmos Music.
Even with just two ear speakers, immersive audio is possible through binaural rendering. This technique uses algorithms that simulate how sound reaches each ear differently, accounting for direction, distance, and the physical shape of your head. The result is a convincing 3D sound field through ordinary headphones.
Apple’s Personalised Spatial Audio improves on this by using the TrueDepth camera on iPhone to scan your face and ears, generating a custom audio profile tailored to your anatomy. This produces a significantly more realistic spatial experience than generic algorithms.
Dynamic head tracking, available on AirPods Pro, AirPods Max and other compatible headphones, monitors your head position and adjusts the audio in real time. Turn your head to the right and the sound field stays anchored in place, just as it would in a real room.
While any headphones can play the binaural version of a Dolby Atmos mix, Spatial Audio-enabled headphones with multiple drivers and head-tracking sensors deliver the most immersive experience.
Where Can You Listen to Dolby Atmos Music?
As of 2026, Dolby Atmos Music is available on these major streaming platforms:
Apple Music has the largest Atmos music catalogue. Over 90% of Apple Music listeners have tried Spatial Audio, and immersive tracks now account for nearly one-third of all plays. 85 of the top 100 Billboard artists released music in Dolby Atmos in the past year. Available on all Apple devices, plus supported third-party headphones.
Amazon Music Unlimited added Dolby Atmos support in 2019. Available on Echo Studio, Fire TV, compatible soundbars and headphones.
TIDAL supports Dolby Atmos and Sony 360 Reality Audio across its HiFi Plus tier.
Spotify does not currently support Dolby Atmos or spatial audio. The company has acknowledged development work on immersive audio features, but no launch date has been announced.
For the best headphone experience, Apple’s AirPods Pro or AirPods Max with Personalised Spatial Audio and head tracking are currently the benchmark. But any headphones connected to a device with Atmos support will work.
Why Mix Your Music in Dolby Atmos?
The ability to experience Dolby Atmos on virtually any device (from headphones to soundbars to car audio systems) makes it an increasingly important format for artists and producers.
Creative freedom. Dolby Atmos gives you a new dimension of sound placement. Instead of fighting for space in a stereo mix, instruments can be separated physically: above, beside, behind the listener, so every element is heard clearly. Busy mixes can breathe. Sparse arrangements can feel enormous.
“It really is the future of music. The format can make your mixes feel cinematic and immersive, almost like you’re watching the story of the song unfold.”
Stefan Du Randt
If you’re ready to take your music into three dimensions, here’s what you need to get started:
What to prepare before your session:
A final, signed-off stereo master. Stereo and Dolby Atmos are separate formats. We use the finished stereo master as a reference to ensure the Atmos mix matches the vibe and loudness of the stereo version. (If you don’t have a stereo mix yet, you can book a “Full Mix” session that includes both stereo and Dolby Atmos.)
Mix stems at 48kHz / 24-bit. Individual stems (drums, bass, vocals, instruments, etc.) give the Atmos engineer the control needed to position sounds accurately in the 3D space.
What to expect:
An Atmos mixing session at Studios 301 typically takes a few hours per track for a remix from stems. Most professional Dolby Atmos mixes are created in Pro Tools using the Dolby Atmos Production Suite renderer, or in Logic Pro which has native Spatial Audio tools. The calibrated monitoring environment of a purpose-built Atmos room, where you can physically hear how the mix behaves across a full speaker array, is difficult to replicate at home.
Our engineers have delivered Atmos mixes for major label releases and independent artists alike. The process is the same, and so is the attention to detail.
The final deliverable is a Dolby Atmos ADM BWF file, the master format accepted by all major distributors for streaming on Apple Music, Amazon Music and TIDAL.
One common question: if a listener doesn’t have Atmos support, what do they hear? The Dolby Atmos master automatically generates a stereo downmix for standard playback. Your listeners always get something, and Atmos listeners get something better.
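The general idea of a downmix can be illustrated with the classic ITU-style 5.1-to-stereo fold-down. To be clear, this is not the Atmos renderer's actual algorithm — just the underlying principle of folding extra channels into left and right with gain scaling:

```python
import math

def downmix_51_to_stereo(l: float, r: float, c: float,
                         lfe: float, ls: float, rs: float) -> tuple[float, float]:
    """Fold one 5.1 audio frame down to stereo, ITU-style.

    The centre and surround channels are mixed into left/right at
    -3 dB (~0.707) so nothing is lost, only repositioned. The LFE
    channel is commonly dropped in stereo downmixes.
    """
    g = 1 / math.sqrt(2)  # ~0.707, i.e. -3 dB
    left = l + g * c + g * ls
    right = r + g * c + g * rs
    return left, right
```

A signal that lives only in the centre channel, for example, ends up equally in both stereo channels — which is exactly where a phantom-centre stereo image would place it.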
Dolby Atmos isn’t the only immersive format anymore. In January 2025, Google and Samsung introduced Eclipsa Audio, an open-source, royalty-free spatial audio format developed through the Alliance for Open Media. Eclipsa removes the licensing barriers that have made Dolby Atmos production costly, and Google has released free Pro Tools plugins for creating Eclipsa content.
Apple also unveiled its own format, ASAF (Apple Spatial Audio Format), at WWDC 2025, extending its Atmos infrastructure with enhanced head-tracking capabilities.
For artists, this growing ecosystem of spatial audio formats is a strong signal: immersive audio is here to stay, and investing in spatial mixing now positions your music for the future, regardless of which format ultimately dominates.
If you’re wondering what this means for your music right now, the questions below are the ones we hear most from artists, producers and labels.
Frequently Asked Questions
What is Dolby Atmos Music? Dolby Atmos Music is an immersive audio format that allows artists and producers to mix sound in three-dimensional space. Unlike stereo (left/right) or surround sound (a fixed ring of speakers), Dolby Atmos uses object-based audio to position sounds above, below, behind and around the listener.
Can I listen to Dolby Atmos on any headphones? Yes. Any headphones can play the binaural version of a Dolby Atmos mix through a compatible streaming service like Apple Music. For the best experience, headphones with Spatial Audio support and head tracking (such as AirPods Pro or AirPods Max) are recommended.
Does Spotify support Dolby Atmos? As of early 2026, Spotify does not support Dolby Atmos or spatial audio. Atmos Music is available on Apple Music, Amazon Music and TIDAL.
How much does Dolby Atmos mixing cost? At Studios 301, Dolby Atmos mixing is priced per track or per album depending on stem complexity and session length. Contact us for a quote; most single-track Atmos mixes are completed in a single session.
What’s the difference between Dolby Atmos and spatial audio? “Spatial audio” is a broad term for any audio technology that creates a three-dimensional sound experience. Dolby Atmos is one specific spatial audio format, and the most widely adopted for music streaming. Apple Music markets its Dolby Atmos support under the “Spatial Audio” brand.
Do I need a Dolby Atmos mix to release on Apple Music? No. Apple Music accepts stereo releases as standard. However, tracks delivered in Dolby Atmos are eligible for featured Spatial Audio playlists and are increasingly favoured by the algorithm. Having both a stereo master and a Dolby Atmos mix gives you the widest potential reach.
Get Your Music Mixed in Dolby Atmos
Ready to make your music immersive? Studios 301 offers professional Dolby Atmos mixing for artists, producers and labels.
Joining the Studios 301 team in 2022, Laura comes from a bookings, communications and events background, and began her journey into music by writing gig reviews and features while working for some of the biggest festivals and clubs in Sydney in the early 2000s.
With a strong passion for events and the music industry, Laura has worked across a variety of sectors within the music, arts and travel industries and curates local parties for the Sydney Street Dance scene.
A request we frequently receive at the studios is:
“Do you have a copy of my files? My laptop/hard drive died and I don’t have them anymore”.
With the rise of digital audio, computer and cloud-based data storage, we thought it may be helpful to provide some tips on good practice to keep your files safe and accessible for the long term.
It may be a tedious chore, but it is essential that you back up your important files. If the session files for your ground-breaking, genre-bending new music are stored in only one place, i.e. on your laptop, then you’re one tech disaster away from heartbreak. Hard drives have a relatively short lifespan; you should not expect them to last longer than 3 to 5 years, and sometimes much less than that. I’ve had drives fail that were mere months old. If your files exist only on the one hard drive that gives out, it can cost many thousands of dollars to have a specialist try to retrieve them.
A great and easy-to-remember concept for this is the “3-2-1 backup rule”. To summarise:
You want three copies of your files
On two different storage types
And one offsite backup
A practical example of this is to keep any important session files neatly organised on your computer, keep a copy on an external hard drive, burn them to a DVD, and store another copy in the cloud. That way you have more than covered the 3-2-1 rule, and nothing short of armageddon will keep you from your files.
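Testing backups — the step everyone skips — can even be automated. As a rough sketch using only the Python standard library (folder paths are placeholders), this compares a session folder against one backup copy by checksum:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of a file, read in chunks so large sessions are fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(original_dir: Path, backup_dir: Path) -> list[str]:
    """Return relative paths that are missing or differ in the backup."""
    problems = []
    for src in original_dir.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(original_dir)
        dst = backup_dir / rel
        if not dst.exists() or checksum(src) != checksum(dst):
            problems.append(str(rel))
    return problems
```

Run it against each backup location in turn; an empty list means every file in the original exists in the backup and matches bit-for-bit.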
The next question, then, is what should you keep? Technically, you can keep everything related to a session. Space is cheap these days, so it may be worth holding onto all your files: the out-takes, the demos, the mixes, the masters. It’s also worth considering future-proofing your sessions. In 15+ years’ time it’s highly unlikely that you will be running the same system you are today, and if you try to open the project session you may find it incompatible, as plugins won’t load and so on. It’s therefore good practice to render out the multitracks of your mixes into two sets of 32-bit floating point WAV files, one set with plugins on and one set with them off, so that if in the distant future you want to remix the tracks, you can recall all the elements with minimal fuss.
Key Takeaways:
Hard drives WILL fail so if you only have a single copy of important files you are setting yourself up for heartbreak.
Set aside an afternoon every few months to make sure your backups are up to date and remember to test them.
Be kind to your future self and assume your current projects won’t be compatible in the future so render as much as you can into WAV format.
MusicNSW is back with the 2020 Levels program: a one-on-one audio engineering workshop for women, trans and non-binary applicants. Levels 2020 will be held at Studios 301 on Saturday, 27 June 2020 and you can apply at musicnsw.com
The hour-long workshops will be packed with personalised advice on producing and mixing your track from world-class audio engineers Antonia Gauci (Will.I.Am, Kesha, DMA’s) and Georgia Collins (Birds of Tokyo, Body Type, Bachelor Pad).
It’s a perfect opportunity to take your sound to the next level. Come prepared with your track session and your questions loaded up, and make the most of this rare learning experience.
We are continuing to dig through the archives to bring you more tasty sounds from the Studio! This week, we’re bringing you a collection of Egyptian Clay Tabla recordings made in Studio 1. It features percussionist Tarek Sawires and was recorded by in-house engineer Jack Prest.
Jack had this to say about the recording:
“The RCA77DX is my go-to on percussion. It gives defined but soft transients and feels reminiscent of 1950’s records, which is my favourite vibe for percussion. I always mic at least a few feet back from the drum so you really capture the full sound of the instrument. For these recordings, I used 2 RCA’s – one in front and one behind the instrument, which I have panned slightly left and right to give the samples a nice 3D quality. I can’t remember exactly, but I’m pretty sure we ran these through the AEA RPQ500 Ribbon preamps, which sound fantastic. If not, it would have been the Neve 88R Console we have in Studio 1. Other than that, these are raw and unprocessed, straight out of the Pro Tools session at 24bit 96kHz.”
You are free to use these samples in any of your recordings – something we hope will encourage some creativity in these troubling times.
There has been a great deal of discussion about target loudness for streaming services recently, particularly in relation to Spotify. This can be problematic for the mastering process, so let’s break it down from a mastering perspective.
Spotify specifies it ‘volume normalises’ all music on the platform to -14 LUFS (measured by ReplayGain as an approximation of LUFS as specified by ITU 1770), so that users can have a consistent listening experience when jumping between songs on playlists. The function is turned on by default when installing the app. As a result, there’s growing speculation that Spotify-specific masters should be delivered at -14 LUFS.
Generally speaking, current masters in most music genres average around the -10 to -6 LUFS region. If you receive a master at, say, -9 LUFS, and visit a website like Loudness Penalty, you may worry that your track will be turned down when ingested to Spotify. In practice, it’s essentially moot whether Spotify or your mastering engineer turns down your track. However, if you do supply Spotify with a -14 LUFS master, the song will be very quiet for subscribers who have loudness normalisation disabled.
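The arithmetic behind that "turned down" step is simple enough to sketch. This is illustrative only — platforms measure integrated loudness per ITU-R BS.1770, which this simple difference does not re-implement:

```python
def normalisation_gain_db(measured_lufs: float,
                          target_lufs: float = -14.0) -> float:
    """Gain (in dB) a platform applies to hit its loudness target.

    A negative result means the track is turned down. For example,
    a master measuring -9 LUFS on a -14 LUFS platform is attenuated
    by 5 dB.
    """
    return target_lufs - measured_lufs

def db_to_linear(db: float) -> float:
    """Convert a dB gain to a linear amplitude multiplier."""
    return 10 ** (db / 20)
```

So the loudness of the master determines only how much gain the platform applies, not how the track sounds once level-matched — which is the article's point.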
Much is made of the loudness wars; however, I’d argue that with the majority of modern music being made with compression in mind, an incredibly dynamic master at -14 LUFS will likely sound abnormal by comparison. The most important thing with a master is that it sounds good within itself.
As an example, take ‘Perfekt Dark’ by Lorn, an electronic artist who uses compression as a sound design technique. Listening to the track on Spotify with loudness normalisation off, the track has a peak level of -0.1, as we would expect. With loudness normalisation on, the new peak level is -1.8, so the track has been turned down. One might conclude that you could potentially get an extra 1.7 dB of range out of the dynamics; however, this would require backing off the compression/limiting to let peaks through, which may in turn change the tone or texture of the track. The compression plays a role in keeping the percussion in balance with the synthesizers and bass within the track. If the mastering engineer were mastering to hit a target number instead of using their ears to make it sound balanced, the overall master would likely be less effective.
Key Takeaways:
Not everyone uses Spotify loudness normalisation
It’s a moving target. Spotify uses -14 LUFS as its target number, but in the past it was -12, and that number may change again in the future. In fact, Spotify already has plans to change the way it measures -14 LUFS
It’s more important that a master sounds good within itself, than be compromised to hit a number. Let the music dictate how loud and how compressed it should be.
If your master gets turned down, that’s OK. If it sounds good at a peak of -0.1, it will sound good peaking at -2
Following the old adage about life and lemons, we are taking advantage of self-isolation by digging back through the archives for some great sounds to share with you. You are free to use these samples in any of your recordings – something we hope will encourage some creativity.
This first pack comes from the recording session for the first album by Sydney based new-jazz innovators Godtet. The session was engineered by Studios 301 in-house producer Jack Prest at our Mitchell Road facility in 2016.
Recorded in the drum booth of Studio 1 with the exposed rock wall for extra slap!
The gear used includes FET 47, Neve 1073 Pre-Amps, Shure SM-57 for the snare, and a pair of Neumann 87s smashing through an 1178 on the rooms. It features Sydney drummer Tully Ryan. We will be bringing you a bunch more sample packs in the coming weeks.
Expect a night of fascinating stories, insights into running your own studio business, and lifting the lid on the so-called “Black Art” of Mastering.
Here’s an opportunity to ask everything you ever wanted to know about mastering and how we go about it, and to mingle with some of the most talented crew in the country at the panel event and BBQ provided.
The panel will cover:
how to best set up a new studio business in the current economic climate
best way(s) to take a new business to the next level and get momentum working back in your favour
the strategies that have allowed him to not only stay in business in a commercial facility, but thrive in it at a time when others are downsizing or getting out of the business altogether
the best way to get clients and maintain relationships in an ever-competitive market
the myth of “work / life balance”.
mixing and mastering technical approach and workflow practice(s)
With a continuing commitment to quality, we are excited to announce the recent installation of a Prism Sound MEA-2 Precision Mastering Equalizer into Steve Smart’s Mastering Room.
The MEA-2 is a four-band stereo mastering EQ with a shelf option on each band, offering a rich, silky sound. It can easily be switched between M/S and stereo configuration at the push of a button.
Steve’s other mastering EQs include the (Abbey Road/EMI) TG12412, the API 5500, and a mastering pair of Amtec (Pultec-style) PEQ-1A Tube Programme Equalizers.
Studios 301 recently hosted the legendary Cold Chisel recording their new album in Studio 1 with producer Kevin Shirley. Our assistant engineer Owen Butcher who was on the sessions gives his recap of the experience and working alongside arguably the biggest act in Australian Rock.
written by Owen Butcher
Despite working at a well-known studio, it’s not every day you get to work with musicians who qualify for legendary status. You meet a lot of young, exciting, up-and-coming artists, but bands you grew up listening to on the radio are a different breed. You know all the words and how all the songs go, but since you don’t know them personally you make up your own stories and ideas about who they are as individuals. This can be a bit of a shock when they arrive at the studio, as there is usually a certain amount of readjustment to do. Luckily, in the case of Cold Chisel, the band members are exactly as you’d imagine. Jimmy is excited and keen to get singing, Ian very thoughtful and considered in what he’s going to play, Don is all about the song and attention to detail, Phil is polite but always up for playing a mean bassline, and Charley is caring and always looking to push the songs to their limits. Kevin Shirley produced the record, and he liked to be very hands-on with Pro Tools, the band and the songs.
Fans of the band will be pleased to note that almost everything was tracked live with minimal overdubs; in almost all cases it was everyone in the room playing together. We set Charley up out in the large room to give the drums a bit of space, with Phil standing near the drum kit with a baffle in between them. We did this so Phil and Charley could communicate visually through the window in the baffle, while keeping Phil’s headphone mix clear. Ian was also sitting in the same live room with his pedals and amp heads, with the speaker cabinet in a booth. We ended up keeping the booth door open and making what I dubbed a ‘sound corridor’ with baffles and tontine. This isolated the guitar amp enough from the drum mics, but still gave Ian the feedback he needs from the amp on the guitar strings and kept him as close to the drum kit as possible. When playing the upright and grand pianos, Don was in the other booth for isolation purposes. We took the front of the upright piano off to expose the strings and make it less boxy sounding. When he was playing other keyboards he sat in the live room with everyone else, as we could DI the Wurlitzer and Nord parts. The Hammond organ was run through the Leslie cabinet (Don during recording: “You should see what I learned you could do with a Leslie back in the day after carrying it up 4 flights of stairs at the Grafton RSL club!!”). Jimmy was actually singing in the control room. He liked to be near Kevin to discuss ideas, and he sings so loud that the studio monitors don’t cause big enough bleed issues.
Equipment- and microphone-wise, we used mostly staple rock microphones for the setup, as they’re a straight-up rock band (U47 on kick drum, 421s on toms, 57s on guitars and snare, etc.), though we did add some Sony high-resolution microphones to the mix as overheads (Sony C-100) and on the upright piano (Sony ECM-100N) to lend some extended range to the traditionally less detailed-sounding mics. We used a Neumann M149 on vocals because it can take higher SPL than our other vocal mics. All of these were run through the Neve 88R preamps and EQ, with compression from an 1176 on vocals, an LA2A and Pultec EQP1A on the bass, and a touch of Amek 9098 compression on the piano.
After all the main tracking was completed, we finalised the guitar solos for each song with Ian. This was a fun process: we isolated the amp being tracked in the booth with the door shut, but split the signal from his amps to another Marshall cabinet placed near him. He could use the Marshall to create any feedback or FX, while the recorded sound still came from the amp in the booth. In addition, we have Genelec 1031A monitors from Mitchell Rd hung from the ceiling of the live room, so the whole band mix was pumped through them like a PA system, as though he was playing a live concert! This made him feel more at home during tracking, and we all know that produces much better results.
The band were a pleasure to work with. They worked very hard and purposefully throughout, making sure as they went to record what was best for each of the songs to do them justice. I also noted that they can appreciate a nice Whisky or two during any downtime, so they’re always welcome in a studio I’m working in.
August was headlined by legendary Australian rock band Cold Chisel locked out in our flagship space Studio 1 for 18 days to record a full album, produced by Kevin Shirley and assisted by Owen Butcher.
Ricochet Songwriting Camp locked out 3 of our main recording spaces for an all-female/non-binary rap and hip-hop writing camp over the length of a week. The camp featured artists such as Mirrah, KLP, Coda Conduct, Janeva, imbi the girl, Erin Marshall, Zeadala. Other sponsors included Hilltop Hoods, Thundamentals, Urthboy, Hermitude, KLP, Elefant Traks, Dew Process, Native Tongue, Warner Music, Nando’s, Yulli’s Brews, PPCA and Nike. Read more here
EMI were in Studio 2 for 2 days tracking drums with Australian singer-songwriter Odette. The sessions were engineered by six-time Grammy nominated record producer Damian Taylor, and assisted by Jesse Deskovic.
Other sessions included Australian Navy Band, Jess Kent Vocal Recording with Simon Cohen, Redbull 64 Bars recording, AB Original: Briggs and Trials, Jay Tee Hazard, Australian Jazz vocalist Emma Pask, XMPLAlive recording and filming, Safia x Spotify and more.
Leon Zervos mastered new music for Birds of Tokyo, Stan Walker, Samantha Jade, Yorke, Clare Bowditch, Kota Banks, Isaiah, Lila Gold, Tuka, Guy Sebastian, Jack River, Shag Rock, Furnace and the Fundamentals and Jordan Gavet (NZ).
Steve Smart worked on releases for Alex The Astronaut, Washington, Elk Road, Hollow Coves, Cheetah Coats, Jack Gray, NOT A BOYS NAME, Aydan, Wharves, Vast Hill, Dande and The Lion, Casey Barnes, Machine Age, Ivey and a remastering project for Col Nolan.
Andrew Edgson mastered tracks for CLYPSO, Ainsley Farrell, Thomston, darby, Camp 8, SCABZ and Good Lekker.
Ben Feggans has been mastering music for Mallrat, Keelan Mak, Hype Duo, Nick Cunningham, Diana Rouvas (remixes), Johnny Hunter, micra and Johniepee.
Harvey O’Sullivan worked on releases for The Lazy Eyes, møment and a remix for Billy Davis.
C3 Church hosted a live band recording in Studio 1 for a worship album facilitated by Assistant Creative Minister Ryan Gilpin. The live album recording featured a full band accompanied by an audience of 120 guests and was engineered by Stefan Du Randt and assisted by Jack Garzonio.
RØDE Microphones were in Studio 2 for a product demonstration and shoot-out, testing some of their microphones.
Warner Music Australia have been utilising Studio 1 for listening party showcase events introducing their newly signed artists.
RØDE Microphones ran a session in Studio 1 with Jack Prest for one of their endorsed artists. Jack recorded Battle Ax, an experimental classical/fusion viola player, with the assistance of RØDE microphones and their technicians.
A$AP Twelvyy recorded for 3 days with Tom Garnett from the Warner tenancy room (Studio 8). A$AP was on tour here for a few weeks and brought along Kid Laroi to track vocals on a collaboration.
MusicNSW and 301 hosted the Levels Masterclass series in the studios on the 18th of May. This spanned four studios, with over 50 students working across songwriting, production and mixing techniques with Milan Ring, Mookhi, Sparrows and Rebel Yell.
SIMA and ABC Classics hosted a live album recording for Julien Wilson‘s jazz quartet in Studio 1. There were over 110 in attendance, with Owen Butcher facilitating the live recording and stream to ABC radio.
“Thank you so much for a seamlessly successful event for our Sydney Symphony Vanguard members program. I was so impressed by your professionalism, friendliness and accommodation of all of our requests. The event was well staffed and the team went out of their way to make us feel at home. […] It was a huge honour to hold an event in such an iconic space and we are so grateful for your hospitality at all stages of event planning.”
Leon Zervos has been working on new releases for The Veronicas, Jess Mauboy, Stan Walker, Jungle Giants, Montaigne, Slum Sociable, Cyrus, Sahara Beck, JEFFE, Fergus James and Dawn Avenue (Mexico).
Steve Smart has mastered music for Dean Lewis, Vance Joy, Spookyland, No Frills Twins, Oh Reach, Lakyn, RedHook, Abi Tucker, Danielle Spencer, Dande and the Lion, PLANET, and Ivey.
Andrew Edgson has worked on tracks for The Lulu Raes, The Laurels, Yeevs, Foreign Architects, Merpire, Black Aces, The Paddy Cakes, Noah Dillon, Jack Botts and Fatin Husna (Malaysia).
Ben Feggans has been mastering for Luboku, Oh My My, Emma Hewitt, Love Deluxe, Nick Cunningham and remixes for Alison Wonderland and Owl Eyes.
I had the opportunity to discuss this topic on a panel at the Fast Forward music technology conference alongside members of the Australian industry and media. From the role of a studio manager, I am able to observe the content creation process from a bird’s eye view. It’s not unusual to have an internationally acclaimed major label artist, an independent songwriter or bedroom producer, and an orchestra under the same roof working simultaneously.
People are quick to assume that record labels have become irrelevant and cumbersome, unable to adjust to the convergence of media spearheaded by streaming platforms.
The truth is, infrastructure and technology have disrupted all industries; music has just been hit a lot harder than most.
Does the rise of independent content creation spawn a threat to the major labels?
Will labels enter into mass acquisition of independent channels in order to beef up their current offerings and widen their spread? Will streaming platforms soon provide label services from under their own banner and create a self-sustaining machine?
The answer to all of these questions is likely yes, but this is only a prediction.
It is evident that times have changed dramatically, and the route to market is easier than ever.
Technological advancements and the mass production of recording equipment have enabled artists to bypass the traditional industry gatekeepers and release content instantaneously. Artists revel in the low-cost, risk-free environment of content creation from their bedrooms and basements, where they can craft their product offering in a timely and decisive manner. This is a far cry from the golden age of the record boom.
Prior to home recording becoming an economic option, there was no choice but to visit a studio to record music and hope for support from one of the majors. Labels would screen endless talent and take a punt on “the next big thing” in the hope they were betting on the right horse. Without the meaningful data and consumer insights we have today, it was the equivalent of betting on a horse without knowing the odds of winning.
Sure, the horse has a history of winning and a talented breeder, but did it actually have the gusto to win?
The labels were the tastemakers, and the few artists who rose to the top were fine wine. Today’s media market is flooded, noisy, and feels like everyone has, well, frankly had too much to drink.
Perhaps within the future exists a self-managed ecosystem of flexibility from both sides of the fence, where short-term artist projects can be economically cycled through distribution channels until something sticks.
If Content is King, then surely Curation is Queen.
One thing is evident: we are consuming more media now than at any point in recorded history, and this will not slow down any time soon; meanwhile, the channels of distribution have drastically changed and will continue to do so.
Labels are just like any other corporation that has had to paddle its way through the digitally disrupted currents, so it’s high time we gave them a break.
I don’t think the question is as simple as “will they sink or swim,” it’s more likely to be “when will the storm cease?”
It’s more important than ever to have strong and diverse role models in the creative industries, especially in music production, where women comprise less than 2% of engineers and producers working professionally. The statistics for music university graduates tell a different story, however: nearly half of graduates who majored in music are female, but by the time they hit the workforce this number drops dramatically. There is a clear disconnect between the entry of young women into creative industries and the occupational pathways that lead them into professional careers.
Studios 301 is committed to championing diversity and gender equality within the music industry, co-hosting a series of collaborative events aimed at engaging, educating, and motivating young women and non-binary people to take up roles in audio engineering and music production. The initiatives, spearheaded by Studio Manager Shelley Bishop and Mastering/Tenancies Manager Lynley White-Smith, who comprise half of the management team at Studios 301, intend to provide an environment of support and encouragement for young professionals.
To celebrate International Women’s Day 2019, Studios 301 and APRA AMCOS have partnered on a national workshop series aimed at promoting the growth of female and non-binary music producers. Over 100 producers in Sydney alone applied through APRA AMCOS for the full-day experience, where industry facilitators walk through production techniques across band recording, pop/contemporary music, and film and TV composition. Presenters include acclaimed, multi-award-winning Australian screen composer and current president of the Australian Guild of Screen Composers Caitlin Yeo, highly sought-after classically trained screen composer Bryony Marks, multi-talented producer / engineer / musician Antonia Gauci, genre-defying producer Jan Skubiszewski, Ableton expert Jane Hanley, electronic music innovator Eve Klein, and rising-star producers / electronic musicians Ninajirachi and Mookhi.
Later in the month, homegrown non-profit organisation Women in Music Sydney and Studios 301 are collaborating on an immersive studio experience and panel discussion entitled The Recording Process, featuring an in-depth look at the recording industry through the eyes of female audio engineers. Panelists include ARIA Award-winning sound engineer Virginia Read of ABC Studios, Warner Music’s Christina Thiers, and Studios 301 producer / audio engineers Antonia Gauci and Tahlia Coleman, with Studio Manager / producer / musician Shelley Bishop moderating.
Studios 301 and industry partners actively seek to improve the hire and retention rates of women in the music industry, and provide mentorship and support for young professionals entering the workforce.
For more information or to follow upcoming events at the studio, subscribe to our newsletter:
Congratulations to all of this year’s ARIA nominees, with a special shout out to our incredible clients and their teams that have been nominated including Amy Shark, Esoterik, Adam Eckersley & Brooke McClymont, Jessica Mauboy and Jimmy Barnes. Additional kudos to our Studios 301 team who have worked on their releases.
Our nominated clients and works are as follows:
https://studios301.com/our-work/love-monster/
Apple Music Album Of The Year / Best Female Artist / Best Pop Release
Wonderlick Recording Company
Mastered by Leon Zervos
https://studios301.com/our-work/my-astral-plane/
Best Urban Release
Flight Deck / Mushroom Group
Mastered by Leon Zervos
https://studios301.com/our-work/adam-brooke/
Best Country Album
Lost Highway Australia / Universal Music Australia
Mastered by Leon Zervos
“Be like you” and “Awake” vocals engineered by Stefan Du Randt
Best Original Soundtrack or Musical Theatre Cast Album
“Texas Girl At the Funeral of Her Father” recorded at Studios 301
Engineered by Owen Butcher, Assistant Engineer Tom Garnett