Hi, and welcome back. Apologies for the slightly click-baity title. But
yes, there really is a secret of maximum loudness, and it works regardless of the genre, the medium
or the platform. Obviously I'm not going to give it up yet, you'll have to keep watching, because
we need some background to properly understand it. I remember getting my first little
stereo system as a child, and thinking that the volume control set an absolute loudness for playback.
But later on, when I started using my dad's hi-fi to copy vinyl records onto cassette, I discovered
that the record level could make a big difference: set it too low and my copy would be
really quiet and buried in noise; but set it too high and I'd
get horrible distortion. With some material I noticed that pushing the
levels a little into the red actually made it sound better… but that's not the subject of
today's video. The point is, while you had a little leeway to choose how hot you printed to tape, the
nature of the medium dictated that most material sat around the sweet spot where you get the best
signal to noise ratio with the least distortion. When digital audio was invented, it was originally
assumed that people would continue to work the same way: the nominal unity level of analogue
recording would be equivalent to -18 or -20 dBFS, with the extra dynamic
range reserved for headroom. But of course with digital audio, that headroom
is just as clean and linear as the rest of its dynamic range. There's no distortion penalty
for pushing the levels right up to full scale; you just get a louder master.
So that's what people did. And then people discovered that you could actually
push the levels a little further and deliberately clip the converters, and
that this clipping would be inaudible if it were brief enough, or might
even sound subjectively good. And then the first digital brick wall limiters
came along, and the loudness war really got going in earnest. Now you could push those levels
harder… and harder… and it wouldn't clip, it would just get louder… and suddenly everyone
was trying to be louder than everyone else. Let's pause here to discuss
the elephant in the room: yes, louder is better. At least until you
approach the pain threshold anyway. Below that, every dB of extra loudness will
translate to perceived extra clarity, extra solid bass, more detail, more
depth, more width. Whatever emotional impact the music has at a low level, it
will have more of it when turned up louder. But real loudness is not the
same as artificial loudness. Real loudness takes the whole waveform and scales
it up. Assuming you're not damaging your hearing, this is a good thing. Real loudness
is why a live gig can be so exciting: the natural, uncompressed dynamics of a
drum kit, augmented by a good PA system, provide a visceral experience that you'll never
recreate with a hi-fi system and a studio album. Artificial loudness, as created
by a brick wall limiter, is not the same. Increasing the average
levels makes the audio seem louder, but the peaks are now smashed down into the
body of the mix, and this has consequences. Not surprisingly this can reduce
the impact of your transients, making them less punchy or snappy. But there
are also more subtle side effects, such as a loss of depth and space: while real loudness
can fill a room and immerse you in the music, artificial loudness forms an oppressive
wall that pushes you back in your seat. Perhaps most egregiously, reducing the
peak-to-average ratio can suck the life out of a mix, making it flatter, more boring, less
exciting… if it's an upbeat energetic rock song this is perhaps the worst thing
you can do: the listener won't know why, but the music simply won't be as exciting as
a classic rock mix with the dynamics intact. Of course, at this point I'm obliged to point
out that we've come a long way since the first brick wall limiters were released, and limiters
have become much more sophisticated and powerful. Pro-L 2 with its Modern limiting
style can smash a mix really hard, yet still retain a remarkable
sense of punch and definition. But no amount of clever program dependency
can change the fact that you're reducing the relative size of the peaks, and filling
in the gaps between them. A clever person once said that music is the space between
the notes… removing or reducing the space between your transients risks sucking
some of the musicality out of your mix. However: when used carefully, a limiter can make
the audio significantly louder with no negative impacts at all. And usually, significantly
louder still with only very minor side effects. So this leaves us with a dilemma: if we don't use
any limiting at all, our mixes will play back much quieter than the majority of other releases.
Of course your listener could just turn it up… hold that thought because we'll come back
to it… but you'll worry, quite reasonably, that they won't, and your mix will just
end up sounding weaker than its peers. But on the other hand, competing with the
loudness levels of many modern releases is also going to make your mix sound
weaker than it potentially could. So, what's the optimum loudness to aim for when
mastering? Or to put it another way, at what point is the benefit of the extra loudness outweighed by
the disadvantage of a low peak-to-average ratio? Well, we need some way to quantify
loudness in order to answer that question, so I'm going to have to talk about metering:
how we measure the loudness of a signal, which is more complicated than
you might initially expect. Let's start with the channel meters in a
DAW. These are quite simple: they flip the negative half of the cycle to positive, then
display the sample values on a decibel scale. Of course, the signal is continually oscillating
through zero, and the meter would be unreadable if it showed that, so the values shown decay slowly
to smooth out the dips and display this sine wave as a constant level. Hence, if the sine wave stops
suddenly… the meter nevertheless decays gradually.
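If you're curious what that looks like under the hood, here's a minimal Python sketch of the idea. To be clear, this isn't any particular DAW's implementation, and the release rate is just a number I've picked for illustration:

```python
import numpy as np

def channel_meter(samples, fs=48000, release_db_per_sec=20.0):
    # Rectify each sample, convert to dBFS, and only let the reading
    # fall at a fixed release rate, so the dips as the waveform passes
    # through zero don't make the display unreadable.
    fall_per_sample = release_db_per_sec / fs
    reading = -120.0                                  # meter floor, in dBFS
    out = np.empty(len(samples))
    for i, s in enumerate(samples):
        db = 20 * np.log10(max(abs(s), 1e-6))         # rectified sample level
        reading = max(db, reading - fall_per_sample)  # jump up, decay down
        out[i] = reading
    return out

# A -6 dBFS sine that stops halfway: the meter shows a steady -6 dBFS,
# then keeps decaying gradually after the signal cuts out.
fs = 48000
t = np.arange(fs) / fs
sine = 0.5 * np.sin(2 * np.pi * 440 * t)
sine[fs // 2:] = 0.0
levels = channel_meter(sine, fs)
print(round(levels[fs // 4], 1), round(levels[-1], 1))  # ≈ -6.0, then lower
```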
This is perfect when you're tracking vocals and you need to know how close you are to clipping.
But it doesn't cut the mustard for mastering, for a couple of reasons.
The first problem is that this meter just reads
maximum sample values. But when this signal is converted to analogue, the original curvy
analogue waveform will be perfectly recreated, and it's normal for the peaks of the
analogue signal to fall in between samples, higher in level than the
sample values on either side. Measuring the actual peaks of the waveform
requires a more sophisticated true peak meter that calculates values in between
the samples. More on this later.
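Here's a rough illustration of how a true peak meter can do that, by upsampling before taking the maximum. This is a sketch in the spirit of the BS.1770 true peak measurement, which oversamples by at least four times; it's not FabFilter's actual algorithm:

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbfs(samples, oversample=4):
    # Band-limited 4x upsampling reveals the peaks that fall in
    # between the original samples.
    upsampled = resample_poly(samples, oversample, 1)
    return 20 * np.log10(np.max(np.abs(upsampled)) + 1e-12)

# The classic example: a full-scale sine at a quarter of the sample
# rate, phase-shifted so every sample lands at ±0.707. The sample
# peaks read -3 dBFS, but the reconstructed waveform hits full scale.
fs = 48000
t = np.arange(4800) / fs
x = np.sin(2 * np.pi * (fs / 4) * t + np.pi / 4)
print("sample peak:", 20 * np.log10(np.max(np.abs(x))))  # ≈ -3.0 dBFS
print("true peak:  ", true_peak_dbfs(x))                 # ≈ 0.0 dBFS
```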
Perhaps more significantly, however, peak metering… and also true peak metering… tells us almost nothing about
how loud the signal will sound. This is partly due to the nature of human hearing.
It's not flat in frequency response, so a sine wave at 2 kHz… will seem much louder than a sine wave at
200 Hz… even though both have the same peak level. Also, a long burst of 2 kHz… will seem much
louder and more obnoxious than a very short blip… even though both are again at the same peak level. A more real-world example would be
drums and distorted electric guitar: if I normalise these to the same peak level,
the guitar completely obliterates the drums… the drums need to peak at a significantly
higher level than the guitar for the two to sound balanced, because of the short
spiky transients in the drum track. Finally, peak levels don't really mean much
anyway! The easiest way to demonstrate this is with a sawtooth wave: here's how that looks
on Pro-Q 3's analyser, with a low fundamental plus a harmonic series above it. Each of these
harmonics is essentially a separate sine wave. Now I'll add a high-pass filter. I'll start with
it set way too low to have any audible effect on the sawtooth wave… but notice, the peak
level is already reading 2 dB higher than it was. Now let's start to wind up the filter cutoff,
until it starts to shave away the level of that lowest partial… I'm sure you'll agree that this
doesn't sound louder than the unfiltered version… and yet the peak level of the filtered version
is a full 6 dB higher. If you've ever wondered why an EQ cut
or subtractive filter can apparently make your mix louder, this is what's
actually going on: phase shift has caused the individual partials to add up
differently, and the peaks of the waveform have got higher as a result. But it's
not actually louder in any real sense.
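If you'd like to verify this for yourself, here's a small numpy/scipy sketch along the same lines: a band-limited sawtooth, high-pass filtered well below its fundamental. The exact figures will depend on the filter used, but the pattern is the same: the RMS barely moves while the peak level climbs:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs, f0 = 48000, 50.0
t = np.arange(4 * fs) / fs
# Band-limited sawtooth: a harmonic series with 1/k amplitudes.
saw = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, 200))

# High-pass well below the fundamental: inaudible, but plenty of phase shift.
b, a = butter(4, 30.0 / (fs / 2), btype="highpass")
filtered = lfilter(b, a, saw)[fs:]   # skip the filter's start-up transient
saw = saw[fs:]

peak = lambda x: 20 * np.log10(np.max(np.abs(x)))
rms = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Phase rotation near the cutoff re-aligns the partials: the peaks grow
# by several dB even though the spectrum (and the sound) is unchanged.
print(f"peak: {peak(saw):+6.2f} dB  ->  {peak(filtered):+6.2f} dB")
print(f"rms:  {rms(saw):+6.2f} dB  ->  {rms(filtered):+6.2f} dB")
```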
So measuring peak levels is pretty much useless when trying to determine how loud something will sound. That needs a different
approach, and there have been a few over the years: the classic VU meter relies on the
physical ballistics and inertia of the needle to smooth out the peaks, and display
more of an average level instead. This is much better than a simple peak meter. But
you need a little practice to interpret what a VU meter is telling you, especially when it comes
to signals with lots of low frequency content, which tend to pin the meters even
at perfectly sensible levels. Another approach is to measure
RMS levels, or Root Mean Square: without delving too deeply into the maths
involved, this averages out the levels over time, "mean" being simply a more
mathy way of saying average. This solves the problem of phase shift:
inaudible changes in phase will not cause changes in RMS levels, and our sawtooth wave
doesn't read higher when I high-pass filter it. It also potentially solves the problem of
duration, as we're averaging the signal levels over time. But the question is: how much time? If
you average levels over a short 50 ms window you'll get a much faster-moving, bouncier reading than
if you average over 500 ms… or two whole seconds… With longer windows, RMS levels
correlate quite well with perceived loudness.
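A sliding RMS is easy to compute yourself: square the signal, average over the window, then take the square root. Here's a quick sketch; the window lengths are the ones I just mentioned, and the variable names are mine:

```python
import numpy as np

def moving_rms_db(x, fs, window_seconds):
    # RMS over a sliding window: mean square via convolution, in dB.
    n = int(window_seconds * fs)
    mean_square = np.convolve(x ** 2, np.ones(n) / n, mode="valid")
    return 10 * np.log10(mean_square + 1e-12)  # 10*log10 of a power = dB

# The same material, three window lengths: 50 ms bounces with every
# transient, 500 ms is steadier, 2 s hardly moves at all.
# fast   = moving_rms_db(mix, fs, 0.050)
# medium = moving_rms_db(mix, fs, 0.500)
# slow   = moving_rms_db(mix, fs, 2.000)
```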
But we still have the same problem as VU meters: bass-heavy signals tend to read too high.
So some meters use weighting to try to emulate
the non-flat frequency response of human hearing. Basically this means putting a filter
before the meter, boosting the upper midrange and cutting the low and high extremes, so the
meter responds more to the frequencies we hear easily and less to those we don't.
This is the standard A-weighting curve, as used
for measuring environmental noise levels… I told you measuring loudness was
more complicated than it seems!
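Incidentally, the A-weighting curve has a published analogue transfer function, so you can build a digital approximation of it yourself. This sketch uses the standard pole frequencies from the IEC 61672 definition via a bilinear transform; it's fine for illustration, though not a meter-grade implementation:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

def a_weighting_coeffs(fs):
    # Standard A-weighting analogue transfer function: four zeros at DC,
    # double poles at f1 and f4, single poles at f2 and f3, with the gain
    # normalised to 0 dB at 1 kHz. Digitised with the bilinear transform.
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    A1000 = 1.9997  # gain correction at 1 kHz, in dB
    num = [(2 * np.pi * f4) ** 2 * 10 ** (A1000 / 20), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]),
                     [1, 2 * np.pi * f2])
    return bilinear(num, den, fs)

# Weight the signal before metering it, so the bass and extreme treble
# count for less, roughly as they do to our ears:
# b, a = a_weighting_coeffs(48000)
# weighted = lfilter(b, a, signal)
```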
But we do have a better option these days, thanks to the broadcast industry,
which has developed a new standard way to measure loudness,
with a new unit of measurement known as Loudness Units, or LU for short.
Absolute levels on this scale are written as LUFS, which is short for Loudness Units relative to Full Scale.
This new scale is based on RMS-style averaging,
but includes frequency weighting to avoid the problem of over-reading bass-heavy signals,
and also defines the averaging window used. In fact it defines three different ones,
called Momentary, Short Term and Integrated. Momentary averages the signal over 400 ms, and gives us this relatively bouncy
display, labelled M in Pro-L 2. Short Term uses a three-second
window, and provides a much steadier reading that doesn't move
much in response to drum transients. This is the most useful measure of "how loud
is the signal at this point in the song?". The Integrated reading is a little
different: this averages out the levels over the whole song… or the whole
album, or whatever you want to measure. In Pro-L 2 you can click to reset before the
start of the song… and by the end of the song, or the end of the album, you'll have a single
integrated loudness value for the whole thing. Your DAW or audio editor might also
allow you to analyse clips offline, to calculate an integrated
loudness for the whole clip…
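To show how these pieces fit together, here's a compact, mono-only sketch of the measurement as defined by ITU-R BS.1770 at 48 kHz: the K-weighting pre-filter (the same idea as the weighting curves above, with published biquad coefficients), mean squares over 400 ms blocks for the Momentary values, and the two-stage gating that produces the Integrated value. A real implementation also handles multiple channels, other sample rates and the three-second Short Term window:

```python
import numpy as np
from scipy.signal import lfilter

# ITU-R BS.1770 K-weighting at 48 kHz: a high shelf, then a high-pass.
SHELF = ([1.53512485958697, -2.69169618940638, 1.19839281085285],
         [1.0, -1.69065929318241, 0.73248077421585])
HIPASS = ([1.0, -2.0, 1.0],
          [1.0, -1.99004745483398, 0.99007225036621])

def block_loudness(x, fs=48000):
    # K-weight, then loudness of overlapping 400 ms blocks, in LUFS.
    # This is essentially the Momentary track of the meter.
    y = lfilter(*HIPASS, lfilter(*SHELF, x))
    n, hop = int(0.4 * fs), int(0.1 * fs)  # 400 ms blocks, 75% overlap
    ms = np.array([np.mean(y[i:i + n] ** 2)
                   for i in range(0, len(y) - n + 1, hop)])
    return -0.691 + 10 * np.log10(ms + 1e-12)

def integrated_lufs(x, fs=48000):
    # Gated Integrated loudness: first drop near-silence (absolute gate),
    # then drop quiet passages more than 10 LU below the running average
    # (relative gate), and average the mean squares of what's left.
    lufs = block_loudness(x, fs)
    ms = 10 ** ((lufs + 0.691) / 10)
    ms = ms[lufs > -70.0]                                 # absolute gate
    threshold = -0.691 + 10 * np.log10(ms.mean()) - 10.0  # relative gate
    kept = ms[-0.691 + 10 * np.log10(ms) > threshold]
    return -0.691 + 10 * np.log10(kept.mean())
```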
This might seem like a perfect way to quantify the loudness of a song, but it does have one flaw, which I can illustrate with Led Zeppelin: here's
Immigrant Song… I won't play it to avoid copyright strikes, but I'm sure you know how it sounds: it
kicks in hard, and rocks hard all the way through. Now compare this with Stairway to Heaven: this
starts off famously quietly, and builds gradually all the way through. As a result this mix has a
lower integrated loudness than Immigrant Song, even though the climax at
the end is about the same. So actually the best way to derive
a single loudness value for a song might be to isolate the loudest section, typically the last chorus, and measure the
integrated loudness of just that section. Failing that, watching the Short Term reading
during that section will do just as well. Note that Loudness Units correspond to decibels: if a mix has a loudness of -16 LUFS,
and then you turn it up by 3dB, it will now have a loudness of -13 LUFS,
which helps to keep things nice and simple. So, now that we have a way to measure
loudness, you doubtless want some numbers: what loudness readings should you be aiming
for? Should you even have a target level? And does it need to be different
for different platforms or formats? Well, we'll get to that in part two, along with the secret in the title
of course. Thanks for watching.