
More than 21 bits is meaningless. It's all hype beyond 24 bits.



I believe that's the full dynamic range that human hearing can possibly process: from a really tiny signal that a human can actually hear with noise underneath, up to a really loud signal that is basically pain. Most humans don't have that range. Note that the issue is that the quiet signal needs to be above the noise, so whatever your signal is, the noise floor needs to be below the threshold of hearing given that signal (I believe that while for "normal" signals the noise floor needs to be more than 50-60 dB down, for very quiet signals the threshold of detection is only another 20 dB down).

The trick is that our hearing is logarithmic (we can't hear a quiet sound next to a loud sound; that's what compression relies on), so it maps better to floating-point numbers (i.e. 16-bit floating point is way more than enough).
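
For a rough sense of the numbers, here is a minimal sketch (assuming NumPy is installed) of the level range 16-bit floating point can represent; note this measures the span from the smallest normal value to the largest, not the SNR at any single level:

    import numpy as np

    # Span from the smallest normal float16 value to the largest, in dB.
    # This is level range, not SNR at a given level.
    info = np.finfo(np.float16)
    print("%.0f dB" % (20 * np.log10(float(info.max) / float(info.tiny))))
    # ~181 dB, far beyond the roughly 120 dB usable range of human hearing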

24 bits is effectively for recording engineers, so they have lots of headroom and don't have to worry about clipping basically at all (at 6 dB per bit, the 3 bits beyond 21 imply about 18 dB of extra headroom, which is a LOT).

However, when you calculate non-linear audio effects, you want extra bit depth (generally floating point) because cancellation and multiplication in your intermediate results can really move your noise floor up into bits that humans can actually hear.
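
As a minimal illustration of this (assuming NumPy; the 30 dB gain-down/gain-up chain below is just a contrived stand-in for a real effects chain), re-quantizing an intermediate result to 16 bits drags the noise floor up by tens of dB:

    import numpy as np

    fs = 44100
    t = np.arange(fs) / fs
    signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # -6 dBFS sine

    def q16(x):
        # snap to the 16-bit integer grid, then back to float
        return np.round(x * 32767) / 32767

    gain = 10 ** (-30 / 20)            # attenuate by 30 dB...
    out = q16(signal * gain) / gain    # ...quantize, then amplify back

    err = out - signal
    print("noise: %.0f dBFS" % (20 * np.log10(np.abs(err).max())))
    # roughly -66 dBFS, vs about -96 dBFS had the chain stayed in float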


While I can't argue for 21 specifically, I definitely don't trust everything to use careful dithering and guarantee full quality in 16 bits. So in practice that's 24 bits at forty-something kilohertz.


You might be right; however, 16-bit sounds really harsh to my ears, and 24-bit is the only widely used standard better than 16-bit.


Do you mean “the expression ‘16-bit’ sounds harsh to my ears”, or do you mean that you can hear the difference between 16 and 24 bits per sample?

The effect of bit depth has little to do with how you perceive the sound; what adding more bits does is allow for more dynamic range, i.e. more difference between the loudest possible and the quietest possible sound. More bits bring down the noise floor. This means that, for example, the final part of a fade-out retains more detail at 24 bits than at 16, but this difference is not something that you would be able to observe in normal listening conditions.

If you'd like to learn more about the effects of bit depth, I would recommend "Digital Show & Tell" by Monty of Xiph.Org at https://www.xiph.org/video/.


Is there any difference between those two expressions? Overall, yes, you are right: 24 bits does sound better. Loss of detail and its replacement with digital (aggressive, non-random, correlated) noise indeed sounds harsh.


> 16-bit sounds really harsh to my ears, and 24-bit is the only widely used standard better than 16-bit.

That really doesn't make any sense. The bit depth provides for a dynamic range, meaning the difference between the loudest and quietest sounds which can be encoded. 16 bits is enough to go from "mosquito in the room" to "jackhammer right in your ear". Congratulations, 24 bits lets you go up to "head in the output nozzle of a rocket taking off" with room to spare, that's… not very useful?

Now what might make sense — aside from plain placebo — is a difference in mastering. For instance lots of SACD comparisons at the time were really comparing differences in mastering, with the SACD converted to regular CDDA turning out way superior to the CD version because the mastering of the CD was so much worse.

The "Loudness Wars" is an especially bad period of horrible mastering, and it went from the mid 90s to the early-mid 2010s (which doesn't mean that regular-CD has gone back to "super awesome", just that you're unlikely to have clipping throughout a piece these days).


What I said actually does make sense. First of all, if you are digitally lowering the loudness of audio (say, 4 times), you are losing precision, and if you later amplify it again, you will never get those bits back. This is what is called headroom. So your typical multiply-by-a-floating-point volume control actually kills the dynamic range of the sound.

For example, I never run my OS volume control and my player's volume knob at 100% (which would preserve the range), because the gain of my amp is simply too high, and even the slightest movement of the amp's knob causes a dramatic change in loudness. Therefore, I keep the digital volume controls at 25% (losing 2 bits on the way; but since the audio is recorded at 16-bit and the pipeline is 24-bit, nothing is lost), and then amplify with my amp. Voila, nothing lost in the process.

Secondly, empirically, every time I switch sound cards to 24 bits, it sounds better. I have noticeably less fatigue. Of course, someone may want to gaslight me (not deliberately, of course) into thinking it is a placebo, but I tried with many people, and all of them noted the difference.
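
For what it's worth, the "25% costs 2 bits" arithmetic does check out in a quick sketch (assuming NumPy; the truncation below is a simplified model of what a real volume control does):

    import numpy as np

    samples = np.arange(-32768, 32768, dtype=np.int32)  # every 16-bit value
    attenuated = (samples * 0.25).astype(np.int32)      # digital volume at 25%
    print(len(np.unique(attenuated)))                   # 16384 = 2^14 levels left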


So what you're saying "actually does make sense" only so long as it's about a completely different subject than what one would normally assume in context, without your having mentioned as much.

When people talk about 24 bits (and >48kHz) in the context of "audiophilia", it's generally about the data at rest and "HD audio" (aka 24-bit music files and downloads), not about the processing pipeline, for which it's generally acknowledged that yes, >16-bit depth does make sense (as it does for the original recording).


Extra bits won't hurt anyway. DACs are not ideal, and there is a comment above about dynamic range. If your recording is not loud, you lose your dynamic range (say, if your loudest sound is only 40% of the full amplitude, you've already lost more than 1 bit), which can be partially recovered by a higher-precision DAC. So it is true in both senses.


> Extra bits won't hurt anyway.

Nobody said it would hurt so I’m not sure why you’re pointing out the consensus like it’s some sort of profound statement.

> If your recording is not loud, you lose your dynamic range

If your sound engineer is wasting your dynamic range, maybe get a better sound engineer? And if they manage to fuck up something at the core of their job, there’s no reason they wouldn’t fuck up just as much with 24 bits to waste.

> So it is true in both senses.

In no meaning of “true” and “both” in common use.


> but I tried with many people, and all of them noted the difference.

Unless this was a double-blind study and the audio levels were exactly the same between runs, this is useless data. Even a 0.1 dB SPL difference between runs is noticeable (people gravitate to louder sounds as better).

> every time I switch sound cards to 24 bits

This may be related to the sound card. I use an external DAC, not a soundcard, as most soundcards that come with computers are not up to par.

Changing 16 bits to 24 bits should not change the audio in a way that is discernible to the human ear.


I have actually never seen any proof that a double-blind study is the best way to do audio comparisons. I mean, yeah, the placebo effect does exist, but knowing what to look for in a certain type of equipment makes it a lot easier to find the phenomenon. A double-blind study, IMO, has to be applied only after an extensive amount of non-blind tests. Yes, the final verdict has to be produced by the blind test, but people need to know what to look for. In any case, I was referring to long-term listening fatigue, which has very little relation to loudness, and I'd argue louder sounds should make you tired quicker.

> This may be related to the sound card. I use an external DAC, not a soundcard, as most soundcards that come with computers are not up to par.

For simplicity, I did not talk about them separately. BTW, following your logic there is no point in buying a DAC, unless there was a double-blind study comparing these DACs to cheaper sound cards. Both are 16-bit/48000, are they not?

> Changing 16 bits to 24 bits should not change the audio in a way that is discernible to the human ear.

This is a bold statement, which itself begs for proof.


> This is a bold statement, which itself begs for proof.

Only if one doesn't understand what those bits mean or what they correspond to.

These bits are important for quantization, which is the process of converting analog sound into digital numbers. On a graph, X = time and Y = amplitude. The more bits, the higher the amplitude resolution.

A 16bit recording has 2^16 steps (discrete values) available for amplitude (65,536) and a 24bit recording is 2^24 or 16,777,216 steps.

So why is this important? Well, a 24-bit recording can more finely record differences in amplitude. Given that 1 bit ≈ 6 dB, a regular 16-bit recording already has a dynamic range of 96 dB; a 24-bit recording has a dynamic range of >144 dB. Permanent hearing loss begins at roughly 125-130 dB SPL.
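
The "1 bit ≈ 6 dB" figure is just 20·log10(2) ≈ 6.02 dB; a quick check:

    import math

    for bits in (16, 20, 24):
        print(bits, "bits:", round(20 * math.log10(2 ** bits), 1), "dB")
    # 16 bits: 96.3 dB, 20 bits: 120.4 dB, 24 bits: 144.5 dB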

You do not hear the difference because if you were listening to a 24-bit recording on a 24-bit capable system at sound levels loud enough to actually discern a difference, you would have permanently damaged your ears. Actually, I believe that applies to 20-bit, let alone 24-bit.

So why do 24-bit or higher recordings even exist? They are useful for people mixing and working with the raw audio, before it gets processed down to 16bit audio for distribution. At 24-bit resolution you have a larger amount of headroom before you start clipping, so it's easier to work with considering you have X amount of bits that are just part of the noise floor.

This is also assuming your input files are actually 24-bit to begin with. The vast majority of files are 16-bit because there is literally no point as a consumer to have larger file sizes for no humanly audible benefit.

44.1kHz 16-bit files are all that you need as a human consumer of audio. 48kHz has to do with video and is not better than 44.1kHz, because you (a human) cannot hear the difference. 44.1kHz is 22.05kHz x 2. Humans hear sound from 20 Hz to 20 kHz -at best-, assuming perfect hearing with no degradation. We sample at 44.1kHz due to the Nyquist-Shannon sampling theorem, and the margin above 20 kHz gives us just a bit of headroom to apply filters to avoid aliasing. [2]
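
A small illustration of the Nyquist point (assuming NumPy): a 25 kHz cosine sampled at 44.1 kHz produces exactly the same samples as a 19.1 kHz cosine, i.e. anything above fs/2 folds back down into the audible band, which is why the anti-aliasing filter has to remove it before sampling:

    import numpy as np

    fs = 44100
    n = np.arange(64)
    above_nyquist = np.cos(2 * np.pi * 25000 * n / fs)
    folded = np.cos(2 * np.pi * (fs - 25000) * n / fs)  # 19100 Hz
    print(np.allclose(above_nyquist, folded))           # True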

So I reiterate my initial assumption: flicking a switch to change from 16-bit to 24-bit should not magically change the quality of audio (in a humanly discernible manner), assuming the file being played is 24-bit lossless audio in the first place.

> BTW, following your logic there is no point in buying a DAC

We're talking about dedicated external equipment vs an onboard soundcard+amp, which is generally neglected. Not -all- onboard cards suck of course, the Realtek ALC1220 chip on my mobo seems to be comparable to or better than entry-level DACs from the specs I'm seeing. This assumes no interference is happening, which is more likely around unshielded electrical components. If you don't believe this is a thing, ask why the audio industry uses thick XLR [shielded AND grounded] cables as standard.

Certain headphones require equipment that can drive them properly, whether it's an onboard soundcard+amp or a DAC+amp. For example, my sennheiser hd600s are 300Ω, but some models go up to 600Ω. And yes, the quality of the amp/preamp does make a huge difference.

If one can prove that a component is unable to drive another, or is mathematically sub-par, one doesn't exactly need double-blind ABX trials. Those are for tests like "Is Monster's $200 cable better than <X> standard cable?", or "Is a McIntosh amp better than a $<amount> competitor?".

I don't need to do a double-blind ABX study to realize that Beats headphones are drastically worse in performance than Sennheiser HD600s: [3], [4], [5]

[0]: https://www.mojo-audio.com/blog/the-24bit-delusion/

[1]: https://web.archive.org/web/20200202124704/https://people.xi...

[2]: https://en.wikipedia.org/wiki/44,100_Hz#Origin

[3]: https://reference-audio-analyzer.pro/en/report/hp/monster-be...

[4]: https://reference-audio-analyzer.pro/en/report/hp/sennheiser...

[5]: https://reference-audio-analyzer.pro/en/report/hp/audio-tech...


>16 bit enough

It is so believed (although there's a lack of supporting evidence, and we know human hearing has excellent dynamic range), but only as long as the mastering work was well done. 24-bit allows for much less destructive human error and is very welcome. Much more so than absurdly high sample rates (96 kHz, reproducing sounds up to 48 kHz per Nyquist), which are of dubious value.

>my sennheiser hd600s are 300Ω

At some frequencies. At others, it's more like 600Ω. Impedance is seldom stable across the frequency range in headphones.

Amplifier design should account for this and still provide enough power[0].

The output impedance of headphone jacks should be low relative to the low end of the headphone's impedance range, in order not to impair frequency response. A 1:10 ratio is commonly cited, which means <2Ω in practice, as 20-30Ω headphones are very common.
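
A rough sketch of why that rule of thumb works: the amp's output impedance and the headphone form a voltage divider, so if the headphone's impedance swings across the band (the 20-60Ω swing below is an assumed example, not a measured one), a high output impedance turns that swing into a frequency-response error:

    import math

    def level_db(z_load, z_out):
        # voltage divider: fraction of the signal across the headphone
        return 20 * math.log10(z_load / (z_load + z_out))

    for z_out in (0.5, 2.0, 15.0):  # ohms
        error = level_db(60, z_out) - level_db(20, z_out)
        print("Zout %4.1f ohm: %.2f dB response variation" % (z_out, error))
    # 0.5 ohm: ~0.14 dB; 2 ohm: ~0.54 dB; 15 ohm: ~2.9 dB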

>Not -all- onboard cards suck of course

But most do. The design of audio circuitry on motherboards doesn't get that much attention. None of my motherboards have good sound, and the flaws vary. Some are lowpassed (overly aggressive anti-aliasing filter). Some are noisy. Most have excessive output impedance (typically more than 6Ω, and at times higher than 15Ω). None can output enough power[0] for the HD600 (my favourite pair).

[0]: https://nwavguy.blogspot.com/2011/09/more-power.html


> That really doesn't make any sense. The bit depth provides for a dynamic range ... 16 bits is enough to go from "mosquito in the room" to "jackhammer right in your ear".

Dynamic range is not the loudest sound / quietest sound ratio (as one would expect), but the loudest sound / noise level ratio. Otherwise you would need additional bits to encode the quietest sound with low enough quantization noise.

The threshold of hearing can be as low as -9 dB SPL, so one would want the noise level below that. Therefore, with the 96 dB of dynamic range from 16 bits, the loudest representable sound would be about 87 dB SPL. But symphonic orchestra music may have peaks well above 100 dB SPL.
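
Spelling out that arithmetic:

    threshold_spl = -9   # dB SPL, best-case threshold of hearing
    range_16bit = 96     # dB, 16 bits at ~6 dB per bit
    print(threshold_spl + range_16bit, "dB SPL max")  # 87, short of orchestral peaks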


I think the bigger issue is likely to be a trash computer mic, a trash preamp/ADC, a trash DAC, trash speakers, and a trash room. I don't care if at some point you're sampling and sending that signal at 1000-bit or whatever; it's still trash, just very accurately sampled trash.


I disagree; I do not own trash equipment. Every time I install Linux, I switch the PulseAudio settings from 16-bit to 24-bit, and the difference is immediate, although subtle. Everyone I know who has tried this noted that listening fatigue is a lot lower with the new settings.


In my direct experience, everyone who has claimed this to me so far is unable to distinguish 16-bit and 24-bit recordings in an ABX.

The audiophile world would do well to adopt the concept of the double-blind study.


I am talking about listening fatigue, first of all, which is a long-term effect. Second, I think double-blind tests are worthless if they are done in isolation. First, you need to run non-blind tests: let people play with the audio equipment as much as they want, in any combination, completely open; only after that, when people have figured out what to look for, run the double-blind test. Forcing unprepared people through a very subtle test surely won't give useful results.


If you can’t distinguish A from B reliably, none of the rest matters at all. The idea that you have to “figure out what to look for” is nonsense if you cannot distinguish the two reliably.

“Listening fatigue” when you know which is which is simply placebo.


You should probably read more attentively. I did not say you do not need a double-blind test. I said that only once you learn what to look for is there a point in doing the blind test.



