> a big chunk of these vulnerabilities would not exist if C and C++ [...] simply didn’t have zero-terminated string, initialized values by default, had a proper pointer+length type thus replacing 90% of pointer arithmetic with easily bounds-checkable code, and had established a culture that discouraged the prevalent ad-hoc style of memory management.
This is Rust's calling-card, so I find this plea for a better lang / eco rather jarring after dismissing Rust for somehow "making the wrong tradeoffs".
No, because Rust's bigger calling card is the borrow checker, which adds a lot of complexity on top of everything else in Rust, and even ends up justifying unsafe (because some optimized, correct data structures simply aren't expressible under it).
Second no: if that is Rust's calling card, you can get those features even in much-hated unsafe C++, if you restrict yourself and commit to doing it right. And if you quote that sentence, you must also quote his actual calling card, which is about culture and the complexity of the language:
> In addition to this, I think the most important reason we have so many vulnerabilities (and bugs in general) is completely disregarded in the hunt for “safe” code: culturally tolerated and even encouraged complexity. In conclusion, putting up with Rust's compile times and submitting to the borrow checker seems like an extreme solution that doesn’t address the most important problem, which is a cultural one. Jai on the other hand is extremely concerned with complexity and tries to get the cultural part right.
And in that regard I agree with him: Rust is definitely better there than C/C++, but not by much!
That's why I fully agree: Rust may not be it, and something like Zig, Jai, Carbon, or even Herb Sutter's Cpp2 (cppfront) may shine brighter one day.
Rust over-focuses on the memory-safety part, which adds too much complexity while not even being able to get fully rid of unsafe.
> not even being able to get fully rid of unsafe..
It makes no sense to "get fully rid of unsafe" and this suggests you've gravely misunderstood the problem. Which puts you in good company, Herb Sutter doesn't seem to understand this on his "CppFront" wiki and Bjarne doesn't seem to grasp it in his recent paper about safety either.
Rust's unsafe keyword marks code which programmers intend to be safe but whose safety the machine can't verify. For example, the Rust compiler can't see why the Linux implementation of Mutex<T> is correct: why would we give out mutable references to anybody who calls this function named "lock"? The programmers (in this case mostly Mara) know how the Linux futex system call works, and their reviewers have concluded the resulting unsafe stanzas, with their commentary, are correct. There will in fact only ever be one mutable reference at a time, even though the machine can't see why.
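To make that concrete, here is a minimal spin-lock sketch (not the futex-based std Mutex being discussed, and deliberately simplified: a real API would return a guard that unlocks on drop rather than a raw `&mut T`). It shows exactly where `unsafe` appears and why the safety argument lives in comments and review, not in the type system:

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

pub struct SpinLock<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}

// SAFETY: the lock protocol below ensures at most one &mut T exists at a
// time, so sharing SpinLock across threads is sound when T is Send.
unsafe impl<T: Send> Sync for SpinLock<T> {}

impl<T> SpinLock<T> {
    pub const fn new(value: T) -> Self {
        Self {
            locked: AtomicBool::new(false),
            value: UnsafeCell::new(value),
        }
    }

    pub fn lock(&self) -> &mut T {
        // Spin until we flip the flag from false to true.
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop();
        }
        // SAFETY: we hold the lock, so no other &mut T exists until
        // unlock() is called. The compiler cannot see this invariant;
        // the comment (and its reviewers) carry the argument.
        unsafe { &mut *self.value.get() }
    }

    pub fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let lock = SpinLock::new(0);
    *lock.lock() += 1;
    lock.unlock();
    assert_eq!(*lock.lock(), 1);
    lock.unlock();
}
```

The point is that callers of `lock` never write `unsafe` themselves; the keyword fences off the one spot where a human argument replaces a machine-checked one.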
The reason to care so much about memory safety is that you can't have type safety without memory safety, and when you lose type safety most of your other guarantees are destroyed. Languages which claim to care less about memory safety often have a caveat (even if unstated) that all bets are off once you abuse their lack of memory safety to destroy type safety because all their other promises assumed type safety and now they don't have that.
No, the point about unsafe is not just those few fancy low-level implementations the Rust language has no concept for, but also proper high-level data structures you cannot realize safely.
But even then, what's the difference between limiting yourself to the safe subset of C++ (haha, yeah, I have to chuckle a bit) and declaring the same for the necessary unsafe parts there? I really don't get it, it seems ;)
Absolutely agree, it’s one thing to get terrified at the complexity of the borrow checker, and another thing to get terrified by the complexity of Unsafe Rust (“the-thing-that-must-not-be-mentioned” in the Rust community).
I think memory safety (the fundamental problem stems from us having to deal with a linear address space) can only be fully tackled with a combination of compile-time and runtime features, but in my opinion Rust goes too overboard on the former and sacrifices too much actual language usability.
I'd really like to see new experiments like generational references (https://verdagon.dev/blog/generational-references) being researched as an alternative to Rust’s type-system approach to memory safety.
Or maybe someday we might finally get thorough tagged-pointer support in hardware (like what CHERI is doing), and system-level programmers will rejoice.
If I'm reading OP right, their issue with Rust is the borrow checker - none of those features require one. They're asking for a language that's safer than C, not as safe as Rust, but simpler and easier to ram things through. I don't necessarily agree, but I think that's what they're saying.
Yeah, I think Bevy[1] is also a great example of something that fits in well with a lot of Rust's strengths while also showing that it's possible to lean heavily into very gamedev-centric approaches with how it approached ECS.
I think there's some truth to trying to avoid unsafe; that said, the times I've dropped down into it, I've found myself chasing heap corruption or use-after-free on more than one occasion :).
Some of Jai's AoS/SoA transforms look neat, and I'm certainly interested to see what it looks like once it starts opening up more.
But remember, all C++ defaults are wrong, and so of course std::span isn't bounds-checked.
While your Rust slice will yell at you (at runtime if it can't figure it out at compile time) when you try to index into the fifteenth item in a ten item slice, C++ has Undefined Behaviour in this case.
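A quick illustration of the Rust side of that contrast (both the checked `get` accessor, which returns an `Option`, and the panicking index operator; the equivalent out-of-bounds access through a C++ `std::span`'s `operator[]` would be undefined behaviour):

```rust
fn main() {
    // Silence the default panic message so the caught panic stays quiet.
    std::panic::set_hook(Box::new(|_| {}));

    let items: &[u32] = &[0u32; 10];

    // Checked access: out-of-bounds yields None instead of reading garbage.
    assert_eq!(items.get(15), None);

    // Plain indexing panics at runtime on out-of-bounds; we catch the
    // panic here just to demonstrate it happens deterministically.
    let out_of_bounds = std::panic::catch_unwind(|| items[15]);
    assert!(out_of_bounds.is_err());
}
```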
True. I love C++ but think “don’t pay for what you don’t use” should err on the side of safety over speed when it comes to what “pay” means. I’d rather `operator[]` be bounds-checked and occasionally have to call `v.data()[i]` or `v.unchecked_at(i)` when profiling justifies it.
Spans/slices are getting incredibly common as a fundamental building block in modern (or modernized) PLs. C# also has Span<T>, Go and Rust both have slices, etc. We're at the point where they should be standardized at the ABI level, IMO, before things get too messy compatibility-wise.
There are interesting differences in these types worth thinking about if you're imagining trying to standardize them somehow.
C++ std::span pulls double duty. One flavour of std::span, the one you might see more often, is like Rust's slice type [T] in that it consists of zero or more values of some type T. The other, though, is more like Rust's array type [T; N], where the size N of the span is actually part of the type itself.
Rust's slice is specifically that [T] type, the type system doesn't see any more difference between a [u32] with 1000 entries and a [u32] with 0 entries than it would between a string with "DOG" in it and a string with "CAT" in it, their types are identical.
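A small sketch of that distinction on the Rust side (the C++ analogues being fixed-extent and dynamic-extent std::span respectively):

```rust
fn main() {
    // [u32; 3]: the length is part of the type, like a fixed-extent span.
    let fixed: [u32; 3] = [1, 2, 3];

    // &[u32]: the length is a runtime value. A 0-element slice and a
    // 1000-element slice have the identical type, just as the comment
    // above says.
    let short: &[u32] = &fixed[..0];
    let long: &[u32] = &[0; 1000];
    assert_eq!(short.len(), 0);
    assert_eq!(long.len(), 1000);

    // A function over &[u32] accepts any length; one taking [u32; 3]
    // would accept only that exact size.
    fn sum(xs: &[u32]) -> u32 {
        xs.iter().sum()
    }
    assert_eq!(sum(&fixed), 6);
    assert_eq!(sum(long), 0);
}
```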
C# Span<T> deliberately can't live on the heap. The CLR doesn't want to cope with this type, and by ensuring it's part of your program's stack any questions about the lifetime of the Span are obviated and tricky-to-reason about garbage collection problems don't arise.
Go's slices are very strange, because Go's arrays are like those in Rust (their size is part of their type) and yet Go's slices can append. This works because a slice is a view into a backing array: when an append exceeds the backing array's capacity, Go allocates a new, larger array and copies all of the slice's data into it.
For a C API, a slice would be a simple { void*, size_t } or { void*, void* } struct, but I guess for memory managed languages this isn't enough information to pin the underlying data into memory (for instance a reference to the underlying 'object' - don't know how such language-specific details could ever be expressed in a 'standard ABI').
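A sketch of that { pointer, length } struct, written here in Rust with a C-compatible layout (the name `CSlice` is made up for illustration). The comments mark exactly the gap the parent comment points at: nothing in the struct keeps the backing storage alive.

```rust
// Hypothetical C-ABI slice: pointer + length, nothing more.
#[repr(C)]
pub struct CSlice {
    ptr: *const u8,
    len: usize,
}

impl CSlice {
    // NOTE: the caller must keep the backing storage alive for as long
    // as the CSlice is used. The struct carries no lifetime or ownership
    // information, which is exactly the problem for GC'd languages: there
    // is no reference here for a collector to trace or pin.
    fn from_slice(s: &[u8]) -> Self {
        Self { ptr: s.as_ptr(), len: s.len() }
    }
}

fn main() {
    let data = [1u8, 2, 3];
    let c = CSlice::from_slice(&data);
    assert_eq!(c.len, 3);

    // Rebuilding a safe slice from the raw parts requires unsafe: we,
    // not the compiler, assert the pointer is still valid for `len` bytes.
    let back = unsafe { std::slice::from_raw_parts(c.ptr, c.len) };
    assert_eq!(back, &data[..]);
}
```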
Oh, it's simple enough, even with managed languages in the picture - it just needs to be decided once, and then everybody uses it.
The problem is that many existing ABIs don't optimize for small structs as function arguments particularly well, so just bolting it on like that can mean poor performance compared to old-school separate arguments for pointer and length. You want a hard guarantee that something like foo(slice1, slice2) will be passed entirely in the registers, not as two pointers-to-stack.
Is a C++ std::span object like std::string_view in that it can outlive the data it points to? If yes, that's hardly an improvement over a raw C pointer/size pair.
That's just by convention though, right? C++ doesn't prevent me from storing the std::span somewhere so that it outlives the scope of the called function? IMHO it's disappointing that C++ adds new memory management footguns without first fixing the basics (like at least some rudimentary lifetime tracking to help with such situations).
I keep considering writing a `unique_span` and `shared_span`. Really, `span` (or a type it’s based on) should have been templated on the pointer type, allowing e.g. a `span<shared_ptr<const T[]>>`.