
Everything old is new again, so maybe you're just in time. Consider that we now have NVMe drives that can read and write 4-5 GB/s [1]; that whole equation changes again...

[1]: https://www.anandtech.com/bench/SSD21/3017



zstd at compression level 1 can do ~2GB/s per core, and as processors keep gaining cores, compressing data by default is a valid proposition.
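To make the level tradeoff concrete, here's a minimal sketch. It uses the stdlib zlib as a stand-in for zstd (the Python zstandard binding is a third-party package), but the shape of the tradeoff is the same: lower levels are much faster and compress only slightly worse.

```python
import zlib

# Illustration only: zlib stands in for zstd here; the specific byte
# counts depend on the input, but level 1 trades ratio for speed.
data = b"the quick brown fox jumps over the lazy dog " * 10_000

fast = zlib.compress(data, 1)   # fastest, modest ratio
best = zlib.compress(data, 9)   # slowest, best ratio

print(f"raw {len(data)}, level 1 {len(fast)}, level 9 {len(best)}")
```

Note that zstd itself also offers negative "fast" levels below 1, which trade even more ratio for speed.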

In fact, if you install Fedora 35 on btrfs, zstd:1 is enabled by default. It uses fs-level heuristics to decide when (and when not) to compress, reducing write amplification on SSDs and gaining some space for free with negligible performance impact, which is nice.
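For reference, the same behavior can be enabled on any btrfs filesystem via the `compress` mount option, where the `:1` suffix is the zstd level (the UUID and mount point below are placeholders; compression applies only to data written after the option is set):

```shell
# /etc/fstab entry (placeholder UUID):
# UUID=xxxx-xxxx  /  btrfs  compress=zstd:1  0 0

# or enable it on a live filesystem:
sudo mount -o remount,compress=zstd:1 /
```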

My 8GB ~/src directory on encrypted btrfs on NVMe uses 6GB on disk and I can easily saturate the link while reading from it. Computers are plenty fast.


> zstd at compression level 1 can do ~2GB/s per core

So unless you can multithread that workload, it's already behind by a factor of 2.

> Computers are plenty fast.

My point was you can no longer assume the disk is significantly slower, at least for streaming workloads. You can often still win by spending CPU cycles doing clever stuff, but it's not several orders of magnitude difference like it used to be.


Most zstd implementations are multithreaded for single compression and decompression tasks. My home server has 24 cores and is using 16x PCIe 3.0 for storage (16 GB/s). Through benchmarking I found that I am storage bound from zstd-3+ and compute bound from zstd-2 and zstd-1. I run zstd-3.
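Multithreaded compressors like zstd's `-T` mode work by splitting the input into independent chunks and compressing them in parallel. A hypothetical sketch of that approach, again using stdlib zlib in place of zstd (zlib releases the GIL while compressing, so threads actually scale here; chunk size and worker count are arbitrary):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunks(data: bytes, chunk_size: int = 1 << 20, workers: int = 4):
    """Split data into fixed-size chunks and compress them in parallel."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda c: zlib.compress(c, 1), chunks))

data = b"some very repetitive payload " * 200_000   # ~6 MB
compressed = compress_chunks(data)
restored = b"".join(zlib.decompress(c) for c in compressed)
```

The cost of chunking is a slightly worse ratio (no matches across chunk boundaries), which is the same tradeoff zstd makes in its multithreaded mode.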


Dumb question, but shouldn't it be the opposite? The lower the compression level, the faster the CPU can process the data, making storage the bottleneck. Higher compression level -> CPU becomes the bottleneck.
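That reasoning can be checked with back-of-the-envelope numbers. The core count and storage bandwidth below come from the parent comment; the per-core throughput figures are made-up assumptions for illustration: you are storage bound whenever aggregate compression throughput exceeds storage bandwidth.

```python
# Assumed per-core compression throughput in GB/s at a few levels
# (illustrative numbers, not measurements).
storage_gbps = 16   # 16x PCIe 3.0, as in the comment above
cores = 24
per_core_gbps = {1: 2.0, 3: 1.0, 9: 0.25}

for level, speed in per_core_gbps.items():
    cpu_gbps = speed * cores
    bound = "storage" if cpu_gbps > storage_gbps else "compute"
    print(f"level {level}: CPU side {cpu_gbps:.0f} GB/s -> {bound} bound")
```

Under these assumptions the low levels are storage bound and the high levels compute bound, matching the intuition in the question.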


I believe in the TrueNAS interface I use these are negative compression levels. I could be wrong. It was just some empirical observations. I may flesh out my matrix more to paint a better picture.


Network connections at 80 Kb/s still exist, and so do the data repositories served over them.



