Now instead of radio we have streaming, and instead of 78s, well, we have streaming—plus an assortment of all the good old mainstay formats.
Recording has advanced a lot over the past 150 years, and it’s continuing to progress across the board—from formats to listening devices to delivery systems. Sound recordists are at the forefront, adopting new tools and technologies as soon as they become available.
An engineer or producer makes it their business to be on the cutting edge of recording, taking advantage of the latest technologies to meet the expectations of both the performer and the listener. Depending on how they are applied, these advances can contribute to improving the general state of the art of audio. They may also be adopted by music delivery systems, and even become requirements.
In the grand scheme of things, every variable in the recording process is an opportunity to shape a vision for what listening will be like in the future.
The people developing software and recording gear are constantly driving sound engineering forward. It’s not just about commerce—the true believers in the field of audio have taken us from the wax cylinder to high-resolution recording at 192 kHz (192,000 samples per second), pushing the boundaries at every stage to make sound even more interesting and vivid.
Yet, in the context of what’s to come, we are still in a relatively primitive phase.
(We’ll be making a deeper dive into digital music and services in a future issue.)
What is going to move the bar and create a next-level experience? That is a question for everyone along the recording ecosystem’s production chain—including the artist, producer, engineer, recording facility, software and hardware manufacturers, university researchers, record labels, music marketers, distributors, streaming services, hard-copy manufacturers, playback-system manufacturers, and, last of all, the listener.
When something radical changes in the chain of variables that make up the recording process, it registers across the entire system. That’s where we are with 3D recording. Suddenly listening has a new variable, and everyone needs to jump on board or get left behind.
So much is accessible now with playback formats and streaming channels—it seems like the choices are multiplying every year. We can listen to anything, anywhere, on any medium. That level of access has transformed how we think about recordings.
And while it wasn’t something in the consciousness of musicians living in the 19th century, here we are in the age of “What will they think of next?”
As technology races forward, it’s not just about what’s next, but how quickly we get there. Everything we do today seems to be looking ahead toward another phase, so whatever the latest version of anything is, it quickly fades into something new.
We are in a constant state of change. Our digital reality has given birth to an emergent virtual world where the possibilities are limitless and techno-visionaries dream of creating at the speed of thought.
It’s all about the technology, and it’s funny really, to try so hard to duplicate reality when, as we know, it can’t compare to the live music experience, the real thing. But we need to replicate ourselves, to duplicate the work and make it into a product. It’s ultimately for promotion, a kind of calling card. So then, why not make it as cool and future-proof as possible?
There is within us an instinctive motivation to continually innovate. As technology moves forward, we will eventually make our way out into the universe, that ever so fantastic, uncharted ocean in the sky.
It’s more than a curious fascination—it has been part of our insatiable need to know the unknown from the beginning of humankind, when we first had awareness and consciousness: i.e. cogito, ergo sum. We have always been driven to map what lies beyond the confines of our current reality—whether through exploration, discovery, or even the way we capture and preserve sound.
So when you think of recording, you can see that it is ultimately connected to cosmology—and to boldly go…
As for the unknown, I wanted to get the lowdown on Dolby Atmos. So I talked to one of the resident experts using this technology in classical recording, David Bowles. It’s not just for movies—it turns out this is what’s happening now in the record biz, so I thought you might want to know too, you know, just in case.
What we are looking at as the current state of audio is, in many ways, a mash-up of all the cool retro environments mixed with a glimpse at the future of listening—which, in the not-too-distant future, is a lot of AR (augmented reality), where you are no longer limited by place. Tickets to any event in the world. An experience on demand. Seat belts not required.
I’m guessing that having Metropolitan Opera Live in HD, Medici TV, the Berlin Philharmonic Digital Concert Hall, Deutsche Grammophon’s STAGE+, live video feeds of competitions, and anything else we equated in the recent past with forward thinking, is going to dissolve into a momentary flash of antiquated attempts at creating consumer content.
Nothing can prepare us for what is coming. The way people have been phone-focused is likely going to morph into an entirely new and ever-evolving form of augmented reality. Whether we can biologically adapt is another question, and whether we can morally hang on to any real purpose of consciousness is in doubt.
But I stray…
For the time being, the audio experience du jour is 3D, and one standout brand of that is Dolby Atmos.
It’s interesting, partly because this particular brand of audio technology real estate has, for the most part, been developed and used for film. Classical music is benefiting from all that theater audio gear that has been set up in so many home environments—so why not use it for listening? And so we are!
But the real story isn’t just about home theaters. It’s about how Apple’s newer AirPods are built for this kind of spatial listening—not to mention the billions of dollars being poured into it, especially at Meta, where they’re planning to wire us all up into a made-up playground of hyper-realistic togetherness. Which brings me back to where this is all really going, as far as AR is concerned.
It’s not about speakers—but rather, it’s about a new nowness: that everywhere is connected and therefore everywhere is now capable. The environment you are in, or can imagine being in, is where we can send you your sound.
And as Dorothy said to her little dog, “Toto, I have a feeling we’re not in Kansas anymore”—we’re about to cross into new territory. It’s not science fiction; it’s just around the corner. And until they perfect olfactory over wireless, audio is leading the way.
Kathy Geisler I’m intrigued. So, just for me and anybody else who doesn’t know, what is Dolby Atmos?
David Bowles Dolby Atmos is a delivery format that permits one to direct sound basically anywhere. In playback, it will adjust to the number of speakers you have—anything from earbuds with motion tracking all the way up to a multi-speaker system in a theater.
Needless to say, this was developed for the movies. Part of the problem in the past was that Dolby would certify theaters, but then their operators would change the settings, so the viewing and listening experience was less than optimal.
Not only is the music recorded in a way that can adjust to the number of loudspeakers, but the cinema itself is set up as a giant object, so it truly allows one to fine-tune the theater. If the presets get changed, they can be recalled very quickly, and it also means that theaters with odd speaker setups can sound much better than in previous years.
KG Is it like the theater at Skywalker, where they have something like 28 speakers set up in a spatial environment experience? Would it be able to adjust to that?
DB Very probably, they were one of the test beds for Atmos delivery, and their room was tuned to Dolby specifications as well.
We’ve all heard movies with Atmos, but we haven’t really been aware of it. It is quite complex to learn, but it also means that you have a lot more flexibility in recording. You don’t simply have to use orthodox mic arrays and spot mics—you can put up as many microphones as needed to capture whatever you’re doing. However, you can also mix this down into a traditional stereo recording.
These days, we refer to the mix we want by the number of speakers. So, five dot one (5.1) means the surround sound setup with front left, center, right, a couple of rear speakers, plus a subwoofer.
With 3D audio, we can add a top layer of at least four speakers to that. So, we’ll say that’s five dot one plus four (5.1.4). We can also have side speakers, which is very typical in a movie setup—now you have 7.1.4.
Dolby playback can be for those orthodox speaker arrays or, as I mentioned before, it could be in a theater with a huge number of speakers. Usually, what happens is that behind the screen there are very large line arrays for left, center, and right. The rest of the speakers on the walls and ceiling are smaller.
Right there, you have a big difference: the front speakers probably have very good low-frequency response, but all these smaller speakers don’t. In that case, they would probably put more signal into the subwoofers to compensate. And as for the subwoofers, instead of having one, you can have two—or even four.
In the older Dolby theater on Brannan Street in San Francisco, there is actually a huge bass bar underneath the whole theater tied into the subwoofers. One really feels the ground shake—it’s quite amazing! Perhaps this setup is a bit of overkill for listening to music.
What I like about Dolby Atmos Music is that it’s now a format a lot of people already have access to through Amazon Music, Apple Music, Tidal, and other streaming sites. It adjusts to everything from earbuds to a multi-speaker system. What’s also intriguing is that, over the years, the home theater market opened the door for 3D listening.
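The dot notation David describes—main layer, subwoofers, height layer—can be made concrete with a tiny sketch. This is purely illustrative (the function name and output shape are made up, not part of any Dolby tool):

```python
# Hypothetical helper: parse an "X.Y.Z" speaker-layout label
# (e.g. "7.1.4") into its parts. Illustration only.
def parse_layout(label: str) -> dict:
    parts = [int(p) for p in label.split(".")]
    main, subs = parts[0], parts[1]
    heights = parts[2] if len(parts) > 2 else 0  # "5.1" has no height layer
    return {"main": main, "subwoofers": subs, "heights": heights,
            "total_speakers": main + heights}

print(parse_layout("5.1"))    # main layer plus one subwoofer, no heights
print(parse_layout("7.1.4"))  # adds side speakers and four ceiling speakers
```

So a 7.1.4 room has eleven full-range speakers plus the subwoofer—the first number never includes the “.1”.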
KG How does it compare to something like Lucasfilm’s THX?
DB These are competing systems, and they’ve been around for a long time. A DVD would have soundtracks encoded with both Dolby and THX; later on, Blu-ray included improved versions of these.
In the immersive world, THX Spatial Audio is meant for headphone listening, with an emphasis on gaming. Before Dolby Atmos, there was AURO-3D from Belgium, which was speaker-based and meant for Blu-ray release. Later on, DTS:X, MPEG-H, and Sony 360 were released.
Now it’s more a question for whoever does your mastering and authoring: what is the intended market? All these systems allow your original recordings to remain intact. With Dolby Atmos, you can designate some sounds as objects, which can either be stationary—located between where normal speakers would be—or you can have them move around.
Typically in movies, the sound effects are the ones that are going to be moving around—but I’ve also heard some pop music where some of the music effects move around as well, and that can be quite interesting.
KG How did you get involved with working with Atmos, since it’s mostly a movie thing?
DB This goes back to when I was hired in 2011 to teach at an annual seminar at the NYU Steinhardt Music Technologies Program. The degree program has a track for grad students called Tonmeister. I’m one of three guests who come in and take students through test recordings. The director hires professional musicians or ensembles, and we come in and make different types of 3D recordings using different kinds of microphone arrays.
On the last day that I’m there, we’ll have a listening session to compare and contrast. At the end of that, the students write up their experiences—not only with these different mic arrays, but also reflecting on what they learned from each of us and the various ensembles that we recorded.
The NYU faculty was starting to work in 3D back then—not only on the recording end, but also on how a listener perceives it psychoacoustically.
In 2011, we could only play audio back. Dolby Atmos had not been released officially, and AURO-3D had just appeared on the market. The big difference between AURO-3D and Atmos is that AURO-3D is based on a specific number of speakers—what we call speaker-based. Dolby Atmos is object-based, and will represent itself properly whether in a cinema, a home theater, or earbuds.
Most importantly, I started incorporating 3D into my commercial projects, starting out as listening and learning experiences. I had already found that the surround recordings I did had an impact on my stereo recording techniques, and now that I’m working in 3D, it has an impact on my surround recording techniques. It’s been a huge learning experience.
I created a mic array for capturing specific information from ceilings and high parts of walls. I then wrote a paper on it, which I presented at three AES (Audio Engineering Society) conferences.
I’m continuing to work on arrays that are a little more specific for home theater delivery, where your speakers might be in different places. The point is that whether it’s Atmos, DTS, DTS:X, or AURO-3D, these are delivery formats. We can make these 3D recordings of acoustic music the way we want and then just choose whichever delivery format is prevalent. These days, it’s Atmos Music, simply because Amazon took it on and then Apple Music followed suit. This really opened the door for Atmos Music.
Immersive recordings I’ve done as far back as 10 years ago are starting to be released in Atmos, and it’s really thrilling for me that a lot of different people will be able to experience this type of music!
KG Is Atmos added during the recording process, or is it part of mastering?
DB All of the encoding is done in post-production, and it results in a proprietary streamed audio file. The channels are interleaved together, as well as the object placement. This is in contrast to a speaker-based approach with set channels.
KG Is it done during mastering, or is it a step beyond mastering?
DB It’s a step beyond mastering, using the tools Dolby has provided. There are two things:
The first is called the Atmos Renderer. You can actually see a three-dimensional representation of a room—needless to say, this is a small movie theater. You can see your objects as sort of glowing blobs in the middle of that.
There’s also something called the Album Assembler, a kind of playlist, which importantly allows for some small changes. It’s not a full project at that point—it’s up to the streaming service to put together a playlist. The album assembly gives you a chance to listen through and check that sound levels are within specifications.
For instance, I worked on some 3D projects with one mastering engineer. When it came to the authoring, I took it to somebody else for a test listen. It turned out the ceiling speakers needed a little bit more presence in one piece, and then in another movement, the back speakers were a little too prevalent. So it’s a very flexible format—as opposed to traditional masters: once we generate them, they don’t change.
KG Take a multi-channel recording—how would you create that in Atmos?
DB Well, this is a very good question, and Dolby had to address this toward the beginning, because when everybody was doing their mixing, they were going toward traditional speaker arrays.
Dolby instituted something called speaker beds, which map directly to speakers. So, I could take a recording—a 5.1.4 setup with no changes to it—and have that as an Atmos master, specify those channels as speaker beds, and they would come out exactly as they went in.
KG So does that mean Atmos wouldn’t change or add anything to the original mix, but would instead adapt it for different playback setups—like if there were fewer or different speakers than the ones originally intended?
DB What it means is that if there’s a home theater that doesn’t have a center speaker, then it would map properly. In stereo, we have what’s called a phantom center. I suspect what would happen in that situation is it would make the mix between left, center, and right have more center prominence—so that phantom center would come out a little bit more.
Many of today’s TVs have speakers in the back, which use the wall to reflect sound back. They amplify those signals, because once the sound reflects, it starts losing intensity. These TVs have front speakers as well, so together it gives the illusion of surround sound.
Atmos playback will do something similar. If you get a home theater receiver, they often come with a measurement microphone, and you can actually tune your listening room!
It’s rudimentary—not the kind of precision that you have in the studio—but it does mean that like in the Dolby movie theater, the characteristics of the room are now known. Then when you listen, it’ll come out properly. You might or might not have the same number of speakers, and you certainly won’t have the frequency response of the speakers that the mixing engineer was listening through.
KG Does it autocorrect for where it is, so you don’t have to program it to know what the end delivery is going to be? It just knows by how many speakers there are—or aren’t—in terms of how it should divide itself and place itself out into whatever speaker array there is.
DB Yes, that’s it.
KG So then, we could call it a smart sound delivery system.
So with Atmos, you’ve got more control over where the sound lives in the room—it’s not just about volume or balance anymore. How does that change things for classical recordings? What kinds of choices can you make now that weren’t possible before?
DB Acoustic music—such as classical, jazz, and folk music—is traditionally recorded with all performers in the same space; therefore, the characteristics of that space become part of the recording. Also, there is more dynamic range and variety between solo performers and large orchestras with choruses. Pop music is often recorded in different (and smaller) spaces, with mics placed very close to the performers.
When I’m doing an Atmos mix, I can make decisions about spot mics. I can say, “Okay, I want this as an object.” So it’s always going to come out strongly between these two speakers. Whereas when you’re doing a traditional mix, you sort of put it there and hope for the best.
If it is an optimal studio situation, you’ll get the sense of that spot mic being located between the two speakers. With Atmos, there’s a higher probability that it is going to come out properly.
I’ve heard some recordings played back in very big spaces, and that’s a little bit frightening: you’ll start hearing holes between the speakers. With Atmos, it will scale up to the increased number of speakers in the home theater—and it’ll scale down, too. The Apple AirPods 3rd generation earbuds gave us motion tracking. So, if you turn your head slowly, it’s as if you’re in a concert hall and you’re looking to the left. The performers haven’t moved, except now they’re more in your right ear because you’ve just turned your head to the left.
KG So it compensates for whatever setup you have?
DB Yes. First of all, I like that there’s a way of delivering 3D audio that doesn’t mean having to press a disc. Fewer and fewer clients are willing to spend the cost of Blu-ray authoring and pressing these days.
Though I prefer listening to a Blu-ray—whether for music or a movie—everyone’s streaming these days, which means recordings must be issued in formats that meet the demand. I don’t necessarily want to have to record in a different way simply because I’m going to be releasing in one format or another. Theoretically, I should be able to make these optimal recordings and make the decisions later.
So, when I’m doing the Atmos authoring, I can say, “Designate all my spot mics as objects,” and know that they’ll come out properly.
It’s a challenge because we have to think way beyond just the traditional mix-and-master workflow. Now we have to think about where the releases will go and how people will listen to them. The recordings that I was doing as far back as 2014 and that are now being released—I was thinking ahead at that point, even though I didn’t know what the release format would be.
KG Right. It sounds like the perfect format for recording a Mahler symphony with the offstage bands.
DB Yeah, I think it would be absolutely fantastic to start hearing more music this way. What I want is something where the imaging is very precise. There’s sort of an optimal position for listening anyway—where the microphones are placed. It’s not from the audience’s point of view, it’s not from the conductor’s point of view—it’s something else. But it’s something that’s optimal for the listener down the road.
KG Is there anything about the Doppler effect that comes in with all this moving around of sound?
DB Well, one thing we’ve noticed is that in a normal acoustic, if there’s something like a percussion hit that comes directly from in front of you, you’ll hear it ricochet behind you. Or you can hear things ricochet side to side. So that brings in a whole other question, which has affected audiophile recordings from the beginning—and that’s about picking an appropriate acoustic for a given project.
KG Is this something that audiophiles are adopting, or are there purists saying, “We don’t want that”?
DB Well, funny enough, a lot of the audiophiles are pure stereo people. I mean, they haven’t even gotten into surround. But, you know, frankly, if you’re going to spend $10,000 on cabling and six figures on a pair of stereo speakers, you’re not very inclined to get the surround speakers, the side speakers, and the ceiling speakers.
Home theater enthusiasts are the ones who are driving this market, and I think now that music-only recordings are coming out in 3D, there will be more listeners. It’s going to be sort of a slow buildup, but the listeners that I’ve taken through this are very appreciative.
Not only do today’s flat-screen TVs use wall reflections to mimic delivery from spaced speakers, but there are better and better soundbars on the market. Recently, I was at Tessmar Studios near Hanover, Germany, listening to some of my Atmos mixes through large studio monitors, mid-price small speakers, and an amazing high-end soundbar from Sennheiser. The differences were smaller than I expected; a good soundbar can work really well in a home theater. Just last week, I read about an even higher-end soundbar that is being developed in the UK. In the end, there is life beyond earbuds!
I’ve played 3D recordings for clients using just the main layer speakers (in a 5.1 configuration), but with the height layer directed to those speakers instead of the ceiling speakers. I would then mute the height layer, and the listeners would say, “Wait a minute, something’s missing!”
I would then repeat the playback as before, and the response often was, “Yes, I can hear something coming from above”—even though it’s not! After routing the height layer properly, listeners were even more impressed. (This is a very interesting perceptual issue and needs more research in a controlled environment.)
In conclusion, I do believe that capturing reflections from ceilings and upper walls with specificity is a very important part of 3D recording.
In stereo, there is always a compromise in how far apart the microphones are placed. If they are too far apart, you get the dreaded “hole in the middle”; if they are too close together, it sounds too much like a mono recording. And if the mics are too close to the ensemble, you miss the players on the outside, even though you get very nice presence.
Also, as far as film scoring goes, I noticed that it tends to use fairly close miking. I was very interested in Hitchcock at one point, and someone had written a book on his music. I wrote to the author, saying I felt that the string section of the orchestra in Psycho was fairly small—maybe about 24 players. And this is a mono recording, mind you. He wrote back and said, “No, this is more than twice as many players, more like 50 or 60.” I thought, well, I’m just not hearing that massive sound.
I think with stereo surround 3D, we can get that presence of the ensemble, but also, with the other mics and the other speakers, we finally can get some space around them as well. So we don’t have to make the compromise of too close versus too distant. That is part of the equation, but it doesn’t have to be the only thing limiting us from making a good recording that has both presence and atmosphere in it.
KG This is only for digital formats. We wouldn’t hear this on, say, an LP or anything that’s a stereo recording format, right?
DB No, unfortunately not. I think the closest people got to any kind of immersion was with binaural LP recordings, which, at that point, were made with something that looked like a human head, with two microphones stuck where the ears should be.
Oddly enough, that’s been coming back with Dolby Atmos. Part of the authoring process is to deliver the Atmos-rendered audio (the one that gives you all the directions of where sounds should go). But there are also binaural files for earbud listening, because Dolby realized that’s part of their audience.
KG Would Atmos integrate with live, speaker-enhanced setups?
DB I think it can. There’s no reason somebody might not want to dial in a little bit of ambience for a movie. Let’s say you’re watching a classic movie with a classic film score—a rom-com, for example. That might have more ambience, whereas if you’re watching something like science fiction, you might not want to add too much, because it could actually detract from the listener experience.
KG And just for clarification, how would this compare to SACD?
DB SACD was developed as a release format for DSD audio (Direct Stream Digital), and at that time it was meant as an archiving format. They discovered that it could be used for recording, too.
The biggest problem is that it’s a one-bit signal, run at a very, very high sample rate. There’s no ability to do anything with plug-ins, processing, adding reverb, EQ, or compression—it just can’t be done. You have to take it out into the analog domain and bring it back in, or you have to convert it to a very high sample rate PCM—the traditional digital format (Pulse-Code Modulation), which is a method for converting analog audio signals into digital audio signals.
However, the SACD format is dual-layer. One layer will play on any CD system and is picked up by the standard laser. The other layer below is picked up by a shorter-wavelength laser and holds a lot more data—you could play, I think, up to 5.1 in DSD.
It was used to remaster a lot of old recordings. For instance, with the RCA Living Stereo SACDs, you can hear certain ones for the first time with three channels, which is how they were recorded and monitored. In the old days, the left and right channels went to the stereo release, and the center channel went to the mono release. And now you can hear them together, and it sounds absolutely amazing!
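David’s point about DSD being “a one-bit signal, run at a very, very high sample rate” becomes clearer with a back-of-envelope data-rate comparison. A sketch, assuming DSD64 (the base SACD rate of 64 × 44.1 kHz) against 24-bit/96 kHz PCM:

```python
# Per-channel data rates: one-bit DSD64 versus 24-bit/96k PCM.
def bits_per_second(sample_rate_hz: int, bits_per_sample: int) -> int:
    return sample_rate_hz * bits_per_sample

dsd64 = bits_per_second(64 * 44_100, 1)   # 1 bit, 2,822,400 samples/s
pcm_24_96 = bits_per_second(96_000, 24)   # 24 bits, 96,000 samples/s

print(f"DSD64:      {dsd64:,} bit/s per channel")
print(f"24-bit/96k: {pcm_24_96:,} bit/s per channel")
```

The two rates end up in the same ballpark—the difference is that DSD trades word length for sample rate, which is exactly why plug-in processing has to happen in PCM or analog.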
KG When you add Dolby Atmos, it’s after the fact—it’s after the recording. You conceive of it while you’re doing your mic setup, but it’s not actually added until the end. So, it’s like an adapting component attached onto a recording—and a recording has to have it dialed in at the end, like a candy coating or something.
And it’s not like you would have the ability to use a playback component in a listening system that would infuse it as it goes through. It has to be embedded into the digital material.
DB Yes, unfortunately, that’s the state right now. In the studio, we can listen through the Dolby Atmos renderer in real time. The streams that are going to the specific loudspeakers have to be designated as speaker beds, and the others have to be designated as objects—so yes, it is something that’s done at the very end.
We could also use a traditional 9.1 mix and then re-designate those tracks in an Atmos environment, then compare those to a Dolby-rendered mix. However, the actual encoding is done in non-real time. I think eventually you’ll be able to do everything in real time—as DSP (digital signal processing) grows year by year.
KG Does it have to be used only in a studio environment? It’s actually in a mastering environment, right?
DB Yes, that’s correct.
KG Is it expensive for an artist or label to do a project in Atmos? What’s actually involved in making it happen on the technical side?
DB The Dolby software is Mac-based at the moment; there’s no equivalent for the PC. What PC people have been doing is using two computers: one with a DAW project (digital audio workstation software) to play out audio, and a dedicated Mac to capture the audio in the Dolby Atmos Renderer. So, yes, it’s expensive.
The cost of the software packages from Dolby isn’t unreasonable, but the extra computer is an outlay. The main thing, really, is time: one has to figure out the proper routing in the DAW. For me, there’s a lot of listening that needs to be done, and more than a little bit of tweaking, because you’re now listening through the renderer—maybe something is going to come out very slightly differently.
Finally, it’s good to listen in different rooms, as well as to the rendered files, and ask, “Is there any final tweaking that needs doing on an individual basis?”
KG So, every time you tweak, you have to run it through in real time again, basically—just like in mastering.
DB Well, if there’s something in one track you want to bring one channel up or down for, that can be done very easily without having to run it through the renderer—and that’s a good feature. But if there’s something like an object coming out too much or not quite enough, you have to open up the DAW session again and then rerun it.
It’s no different, really, from listening to any final master. You listen to it with a mastering engineer in their studio. After they deliver it to you, you listen on your own system—or maybe take it somewhere else—and you might hear different issues.
KG So, it’s a second mastering, basically.
DB Pretty much, yes—and you do the same thing if you’re setting it up for Sony 360, DTS:X, or MPEG-H.
KG If you played me a recording you had done prior to Atmos and compared it to a post-Atmos version, would there be differences in the quality—things like sample rate, resolution, or the mastering process?
I’m thinking of how we sometimes compare different D/A and A/D converters (Digital-to-Analog and Analog-to-Digital converters), and how subtle changes in resolution or fidelity can start to reveal themselves. How much of that kind of difference comes down to sample rates, bandwidth, or playback environments?
DB Well, that’s a good question. At the moment, Dolby Atmos runs at a 48k sampling rate—the same as with movie soundtracks. With Blu-ray, 4K video, and 8K video, audio is still at the same 48k sampling rate!
What I didn’t mention is that there is another Atmos format meant specifically to go on a Blu-ray; at the moment, this has too high of a bandwidth for streaming. However, that encoding is of a higher quality.
Finally, there are some people who have done Dolby Atmos Music projects at a 96k sampling rate.
There have always been different levels of mastering for movies: high dynamic range for theaters, less so for a Blu-ray release, and even less so for streaming. Sadly, home playback systems just aren’t as good as the best theaters.
KG So is there an audible, marked difference?
DB For me, one difference is going from a high sample rate down to 48k. If you’re encoding a lot of objects—such as in a pop piece or film score—then you might start hearing things. And that’s simply because you’re throwing in a lot of objects to be encoded and decoded. I think Atmos is limited to 128 objects at present.
KG So then, the differences are spatial more than anything else. It’s not so much the quality of the sound as it is where you are in the sound—that’s the 3D.
DB I think it’s like the difference between CD audio and an MP3 at 320 kbps—the highest rate an MP3 can have. This format still loses data. Some people cannot tell the difference, but I can say this: it’s not quite as good. It sounds a little bit more grainy in places.
If I bring it into a project at, let’s say, DXD (Digital eXtreme Definition)—which is pretty much the highest PCM sample rate—I’ll certainly hear a difference when I bring it down to CD audio at 16-bit.
Fortunately, Atmos remains at 24-bit. And I think it’s a similar situation with Dolby Atmos overall: because you’re working with encoding, there are compromises. But what makes up for it is the immersive experience that the listener gets.
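David’s CD-versus-320 kbps MP3 comparison can be put in numbers. A quick sketch of the raw bitrates (CD audio is 44.1 kHz, 16-bit, two channels):

```python
# Uncompressed CD audio versus MP3 at its 320 kbps ceiling.
cd_bitrate = 44_100 * 16 * 2   # sample rate x bit depth x channels
mp3_bitrate = 320_000          # 320 kbps

print(f"CD:  {cd_bitrate / 1000:.1f} kbit/s")   # 1411.2
print(f"MP3: {mp3_bitrate / 1000:.1f} kbit/s")  # 320.0
print(f"ratio: {cd_bitrate / mp3_bitrate:.2f}x")
```

Even the best MP3 carries well under a quarter of the data of a CD stream, which is why a careful listener can hear the graininess he describes.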
KG So you think maybe most people would not mind trading the high-def sound for the experiential part?
DB My hope is that we aim toward 10-gigabit Ethernet and fiber delivery in our neighborhoods. Then we would be able to get higher-quality streaming. For me, this would mean possibly going back and redoing some projects—except this time at 96k, or possibly 192k.
KG Right, but Dolby would have to open it up to that first.
DB There are a few things being done in Atmos at 96k, but that’s not meant for streaming—that’s meant for the disc format.
KG Are all the streaming services able to deliver Dolby Atmos?
DB For films and TV, yes. What I’ve noticed on both Netflix and Amazon Prime is that, at the beginning of a program, it’ll say Dolby Atmos. Also, when you’re watching previews, it’ll indicate when something is in 4K video.
KG So, Atmos is on Apple Music, Amazon, and Tidal—but not Spotify.
DB Spotify uses a lot of compression, and that I can certainly hear. It’s great if you want to do some casual listening, but, you know, it isn’t really good for critical listening.
KG What is the correct terminology? It’s not mastered with Dolby Atmos…
DB Authored.
KG I’m going to have to start looking for that symbol. Dolby famously made their money—created their brand, as it were—by forcing people to license their trademark logo if they used Dolby Audio at any point in the recording process. Releases were required to include that trademark on their recordings and pay for the usage. It’s also how they got around people using the technology for free. Dolby is a textbook case for all intellectual property.
When can we hear the recordings that you’ve been doing with Atmos?
DB On the Avie label, there are the Bach St John Passion and the Mass in B Minor (with Easter Oratorio and Magnificat scheduled for release in 2026); also Uncharted with Aryeh Nussbaum Cohen and John Churchwell. On the Delos label, there are three recordings with the chorus Cappella SF, and recently a disc of Romantic Sonatas with trumpeter Andrew Balio and pianist John Wilson. On the Navona Records label, Charles Metz performs 12 Scarlatti Sonatas using not only Muzio Clementi’s published edition but also one of his square pianos (dated 1806)!
KG It’s been a fantastic journey exploring this topic with you! I think it really is an eye-opener—showing how 3D works. I mean really, it’s required reading!
DB Thank you so much for having me and allowing me to talk not only about 3D audio, but also about the process we go through to bring these recordings to market. M