> Every professional security researcher reading this just raised an eyebrow and thought to themselves, "That's a vulnerable application."
I think the reason why every seasoned security researcher I've met also happens to be a heavy drinker or is damaged in some way is because they're in a war of attrition. You want perfect security but it will never happen. These are ultimately mutating, register-based machines. We will probably never be certain that, with a sufficient level of abstraction, a program can be written which will never execute an invalid instruction or be manipulated to reveal hidden information.
Where theory meets practice is where the action happens.
Which is how we end up with this wide spectrum of acceptable tolerances to security. Holistic verification of systems is extremely costly but necessary where human lives matter. However if someone finds a weird side-channel attack in a bitmap parsing library I think we can be more forgiving.
The whole idea that C programs are insecure by default and can never be secure is where theory wants to ignore the harsh realities. We can write languages with tighter constraints on the verification of the programs they create which will lower the risk of most security exploits by huge margins... but we have to trade something away for the benefit. The immediate costs being run-time performance or qualitative things like maintainability.
What I ultimately think will make these poor security researchers feel better is liability. Having a real system and standard in place for professional practices will at least let us soak up the damage, and will force us to consider security and scrutinize our code.
> The immediate costs being run-time performance or qualitative things like maintainability.
I don't think that's true. There's no reason why memory safety has to cost either performance or maintainability.
It feels like there's some sort of fundamental dichotomy because we didn't know how to do it in 1980, and we're still using languages from 1980, but our knowledge has advanced since then.
I probably shouldn't get in on this heated discussion ... but C++ aims to be a safer (higher level) C, at no runtime cost. I think it mostly succeeds at this.
I think C++ can only get so far, without deprecating some of C. Or at least C style code, should come with a big warning from the compiler:
"You are using an unsafe language feature. Are you absolutely sure there is no bug here, and there is no way to use safe language features instead?"
In libraries there might be cases where C-style code like manual pointer arithmetic, C arrays, manual memory management, C-style casts, void* pointers, uninitialized variables, etc. is necessary. But in user code there are most often safer replacements.
There are no such provably safe programs. I assume that you are thinking about programs that have been verified by formal proof systems.
They are not proven to be safe, they are proven to adhere to whatever the proof system managed to prove. This shifts the possible vulnerabilities away from the actual code and onto the proof system and the assumptions that the proof system makes.
For example, take a C program that was proven to be absolutely free of buffer overflows. And therefore labelled by the proof system as 'secure'. But, unbeknownst to the proof system, the C program also interprets user-supplied input as a format string! So it's still vulnerable to format string exploits.
Correct proof systems probably add a huge margin of security compared with the current security standards, but it's not absolute.
There are all kinds of people doing security research. In my experience with some of those people I've met, the challenges they have to contend with seem closely tied to the stress caused by the work that they do.
Sort of like how someone who works with giant metal stamping presses is likely to be missing a couple of fingers if they've worked long enough with them (and in an environment where safety regulations are too relaxed).
These are also some of the wonderful people in my life and I enjoy them very much.