Hacker News

The measure of randomness chosen in this particular paper appears to be an approximation of the Kolmogorov-Chaitin complexity adapted for small integer/binary sequences.

This effectively looks at how easy it would theoretically be to compress/describe the data. For instance, HHHHHHHHHH would have low complexity, as it could be encoded as '10 H's'.

If something is truly random, it shouldn't be possible to encode it due to the pigeon-hole principle.

See http://www.scholarpedia.org/article/Algorithmic_complexity
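To make the compressibility intuition concrete (this is only an illustration, not the estimator the paper uses; zlib stands in for a "shortest description" here):

```python
import zlib
import os

# A highly regular sequence has a short description ("1000 H's"),
# so a general-purpose compressor shrinks it dramatically:
regular = b"H" * 1000
print(len(zlib.compress(regular)))       # a handful of bytes, far below 1000

# Random bytes have no exploitable pattern, so they barely compress at all
# (the output is around the input size, plus a few bytes of overhead):
random_bytes = os.urandom(1000)
print(len(zlib.compress(random_bytes)))  # close to (or above) 1000
```

Real Kolmogorov complexity is uncomputable; compressors like zlib only give an upper bound on it, which is why papers in this area use specialized approximations for short sequences.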



> If something is truly random, it shouldn't be possible to encode it due to the pigeon-hole principle.

This statement is obviously untrue.

“Random numbers” don’t really exist. The original authors were right about that. Every number/sequence is equally likely to occur. There’s even an XKCD about this [0].

I guess what you mean is: If you have a process that generates sequences randomly, most of those sequences are expected to compress badly.

[0]: https://xkcd.com/221/
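The "most sequences compress badly" claim follows from a simple counting (pigeonhole) argument, which can be checked numerically (a sketch; n and k are arbitrary example values):

```python
# There are 2**n binary strings of length n, but only 2**(n-k+1) - 1
# descriptions of length at most n - k. So the fraction of n-bit strings
# that could be compressed by k or more bits is below 2**-(k-1):
n, k = 20, 8

shorter_descriptions = sum(2**m for m in range(n - k + 1))  # lengths 0..n-k
total = 2**n

print(shorter_descriptions / total)  # provably less than 2**-(k-1)
```

So no fixed scheme can compress even 1% of all 20-bit strings by 8 bits; a randomly drawn sequence is overwhelmingly likely to be near-incompressible.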


Yes, you are right on this one - I was wrong, and your corrected statement is the accurate one.



