Movies (and even most video games) make me so angry with that kind of stuff. You want an artificially tailored experience that only works with a zillion-dollar sound system? Fine, you can make it an optional soundtrack that only kicks in with those systems. But the default audio mix needs to be intelligible even on my phone’s speakers.
Video games are annoying because often you can’t hear anything over the explosions and music during the opening cutscenes, but at least you CAN fix it in the settings. Movies, yeesh, you have to rely on your TV’s crap postprocessing.
At least game cutscenes tend to be less mumbly. Even IF the volume of things is all over the place.
TV and Movies? Fuck me, it’s like actors all forgot how to talk and instead just mumble every line.
The technology for this has existed for 20+ years and is actually fairly common. It’s often referred to as dynamic range compression. I think the chief complaint here is that it needs to be more accessible. Pre-applying it would mess up too many use cases.
Audio compression is much older than 20 years! Though you’re probably right about it becoming available on consumer A/V devices more recently.
And you’re definitely correct that “pre-applying” compression and generally overdoing it will fuck up the sound for too many people.
The dynamic ranges that are possible (and arguably desirable) to achieve in a movie theater are much greater than what one could (or would even want to) achieve from some crappy TV speakers or cheap ear buds.
From what I understand, mastering for film is going to aim for the greatest dynamic range possible, because it’s always theoretically possible to narrow the range after the fact but not really vice-versa.
I think the direction to go with OP’s suggested regulation would be to require all consumer TV sets and home theater boxes to have a built-in compressor that can be accessed and adjusted by the user. This would probably entail allowing the user to blow their speakers if they set it incorrectly, but in careful hands, it could solve OP’s problem.
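To picture what that user-adjustable compressor would actually do, here’s a toy per-sample peak compressor in Python. This is a rough sketch of the concept, not production DSP (a real one would smooth the gain with attack/release times); the function name and defaults are mine:

```python
import math

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Toy peak compressor: any sample whose level exceeds the
    threshold gets its excess (in dB) divided by the ratio."""
    out = []
    for s in samples:
        if s == 0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))  # level relative to full scale
        if level_db > threshold_db:
            # keep everything up to the threshold, shrink the rest
            reduced_db = threshold_db + (level_db - threshold_db) / ratio
            s = math.copysign(10 ** (reduced_db / 20), s)
        out.append(s)
    return out

# A loud (-6 dBFS) sample gets pulled down toward the threshold;
# a quiet (-30 dBFS) sample passes through untouched.
loud, quiet = 10 ** (-6 / 20), 10 ** (-30 / 20)
print(compress([loud, quiet]))
```

The upshot for dialogue intelligibility: the explosions come down while the quiet lines stay where they are, so you can raise the overall volume without getting blasted.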
That said, my limited experience in this world is exclusive to mixing and mastering music and not film, so grain of salt and all that.
I thought it would be simple: just make the mono/stereo/etc mixes easier to understand, and leave the advanced stuff to people with a million speakers.
I guess that’s too simple?
I would bet there is one mix created in surround sound (7.1 or Dolby Atmos or whatever), and then the end-user hardware does the down-mixing part, e.g. from Atmos with ~20 speakers down to a pair of AirPods.
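For the non-object-based formats, that fold-down is basically a weighted sum per output channel. A minimal sketch of the textbook ITU-style 5.1-to-stereo downmix (center and surrounds folded in at -3 dB, LFE commonly dropped; exact coefficients vary by device):

```python
import math

def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs):
    """Fold one 5.1 sample frame down to a stereo pair.
    Center and surround channels are attenuated by -3 dB
    before being summed in; the LFE channel is discarded."""
    g = 1 / math.sqrt(2)  # -3 dB gain
    left = L + g * C + g * Ls
    right = R + g * C + g * Rs
    return left, right
```

Which is also why center-channel dialogue can end up buried: it arrives in the stereo fold-down already 3 dB quieter than it was in the theater mix.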
In the music world, we usually make stereo mixes. Even though the software that I use has a button to downmix the stereo output to mono, I only print stereo files.
It’s definitely good practice to listen to the mix in mono for technical reasons and also because you just never know who’s going to be listening on what device—the ultimate goal being to make it sound as good as possible in as many listening environments as possible. Ironically, switching the output to mono is a great way to check for balance between instruments (including the vocals) in a stereo mix.
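The mono check itself is nothing fancy—just average the two channels. A two-line sketch of what that downmix button is doing:

```python
def mono_check(left, right):
    """Fold a stereo signal to mono by averaging the channels.
    Out-of-phase material cancels entirely, which is exactly the
    kind of problem this check is meant to expose."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# A centered signal survives the fold-down; an out-of-phase
# signal disappears completely.
print(mono_check([0.5, 0.5], [0.5, -0.5]))  # [0.5, 0.0]
```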
At any rate, I think the problem of dynamics control—and for that matter, equalization—for fine-tuning the listening experience at home is going to vary wildly from place to place and setup to setup. Therefore the hypothetical regulations should help consumers help themselves by requiring compression and eq controls on consumer devices!
Side tip: if your TV or home theater box has an equalizer, try cutting around 200-250 Hz and bringing the overall volume up a tad to reduce the muddiness of vocals/dialogue. You could also try boosting around 2 kHz, but as a sound engineer primarily dealing with live performances, I tend to cut more often than I boost.
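If anyone’s curious what’s behind that kind of EQ band, it’s typically a peaking biquad filter. Here’s a sketch using the standard Audio EQ Cookbook (Robert Bristow-Johnson) coefficient formulas—the parameter choices (48 kHz sample rate, a -3 dB cut at 225 Hz) are just illustrative:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, Q=1.0):
    """Biquad peaking-EQ coefficients per the Audio EQ Cookbook.
    Negative gain_db gives a cut centered on f0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    # normalize so the output a0 coefficient is 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Apply the filter in direct form I."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# A gentle 3 dB cut around 225 Hz at a 48 kHz sample rate.
b, a = peaking_eq_coeffs(fs=48000, f0=225, gain_db=-3.0)
```

One nice property of the peaking shape: it has unity gain at DC and at Nyquist, so it only touches the band around the center frequency—the rest of the spectrum passes through unchanged.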