Linux/Unix just trusts that if you want something persisted with certainty you'll do an fsync; if you do, it absolutely guarantees you won't lose that data. It will also make sure your filesystem doesn't get corrupted by a power loss, but if you didn't fsync your write you had no business believing it was persisted.
Doing that IMHO matches real world use much better. If I do a compile and get a power outage I don't care if some object files get lost. If I do an INSERT in a DB I do care a lot, but the DB knows that and will fsync before telling me it succeeded. So making sync explicit gives you both great performance and flexibility.
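To make that contract concrete, here's a minimal sketch in Rust (file names and payloads are just placeholders) of the two cases: a compiler-style write that the page cache may lose on power failure, and a DB-style write that fsyncs before acknowledging success.

    use std::fs::File;
    use std::io::Write;

    fn main() -> std::io::Result<()> {
        // Compiler-style write: sits in the page cache, may be lost on power failure.
        let mut obj = File::create("foo.o")?;
        obj.write_all(b"object code")?;
        // No fsync here: we accept that a crash can lose this file.

        // DB-style write: don't report success until the data is on stable storage.
        let mut wal = File::create("wal.log")?;
        wal.write_all(b"INSERT ...")?;
        wal.sync_all()?; // issues fsync(2); only now is the insert durable
        println!("insert committed"); // safe to acknowledge to the client
        Ok(())
    }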
> Doing that IMHO matches real world use much better. If I do a compile and get a power outage I don't care if some object files get lost.
The real world use is someone working on a document, using Save periodically, and expecting whatever was last saved to survive the next outage, not some arbitrarily old version.
As for kernel panics, with iOS likely sharing most if not all of its kernel code with macOS, I'd be surprised if Apple hasn't had a macOS build running on iPhone hardware since before they released the first iPhone.
Instead of fusing them, shouldn't it be possible to speculate that it will not overflow, handle the check on a separate slow path, and roll back in case it did overflow?
I wonder how this interacts with branch prediction. Since overflows should happen very rarely, I'd guess the branch on overflow should almost always be predicted as not taken.
So wouldn't it be possible to have a "branch if add would overflow" instruction, or even a canonical sequence, that a higher end CPU can completely speculate around and just use speculation rollback if it overflows?
I think an important design point here is that the languages that need a lot of dynamic overflow checks are primarily used on beefier CPUs. So if you can get around the code size issue, making it performant only on more capable designs is fine, since the overflow checks will be rare on simpler CPUs.
I don’t think that beefier CPUs and overflow checks are that related. I mean, you’re right, I just want to place some limits on how right you are.
1. Folks totally run JS and other crazy stuff on small CPUs.
2. Other safe languages (Rust and Swift, I think?) also use overflow checks. It’s probably a good thing if those languages get used more on small CPUs.
3. The C code that normally runs on small CPUs is hella vulnerable today and probably will be for a long time to come. Compiling with sanitizer flags that turn on overflow checks is a valuable (and oft requested) mitigation. So there’s a future where most arithmetic is checked on all CPUs and with all languages.
And yeah, it’s true that the overflow check is well predicted. And yeah, it’s true that what ARM and x86 do here isn’t the best thing ever, just better than RISC-V.
By default yes, but you can enable overflow checking in release mode (it’s a conf / compiler flag), and it has standard functions for checked, wrapping, and saturating ops.
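For concreteness, a minimal Rust sketch of those standard functions (values purely illustrative); the release-mode switch is `overflow-checks = true` in the Cargo profile:

    fn main() {
        let x: u8 = 250;
        assert_eq!(x.checked_add(10), None);       // overflow reported as None
        assert_eq!(x.wrapping_add(10), 4);         // wraps modulo 2^8
        assert_eq!(x.saturating_add(10), u8::MAX); // clamps at the type's maximum
        // With overflow checks enabled (debug builds by default, or
        // `overflow-checks = true` under [profile.release]), a plain `x + 10`
        // panics at runtime instead of silently wrapping.
    }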
Yeah, I know about 1; there's also MicroPython, for example, though I have no idea if that's used outside DIY.
I agree about Rust, but I would think that with much stronger type safety and static compilation it should be able to remove a lot more of the overflow checks, and most that remain would be needed in correct C too. At least that's what I learned from my compilers prof, who worked on Ada compilers for many years, and Ada should be quite similar.
But maybe that's my biased hope as I really really hate working with dynamic languages.
The current world record holder (in the published literature) for branch prediction is TAGE and its derivatives. The G stands for Geometric: it is composed of a family of global predictors whose history lengths increase in a geometric progression. That's somewhat relieving, since it means the storage growth is not unlike that of mipmapping in computer graphics: a small constant k times the maximum history length N.
But to a first approximation, if you double the density of conditional branches in the program, then you will need to roughly double the size of the branch prediction tables to get the same performance, even if all of them are correctly predicted 100% of the time.
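As a rough illustration of the geometric part (the parameters below are made up, not any published TAGE configuration): history lengths double from component to component, so even a long maximum history needs only a handful of tables.

    fn main() {
        // Hypothetical parameters, purely illustrative of the geometric progression.
        let (min_hist, ratio, max_hist) = (4u32, 2u32, 512u32);
        let entries_per_table = 1024u32; // assume equally sized component tables

        let mut hist = min_hist;
        let mut tables = 0u32;
        while hist <= max_hist {
            println!("component tracking the last {hist} branches");
            tables += 1;
            hist *= ratio;
        }
        // The number of components grows only logarithmically with the maximum
        // history length, so total storage stays a small multiple of one table.
        println!("{tables} tables, {} entries total", tables * entries_per_table);
    }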
I would argue that the stable syscall interface is one of the main drivers behind Linux's success. Containers as we know them would be quite different if they needed to somehow merge the kernel-version-appropriate .so files into the fs. They would forever remain just sandboxes of essentially the same distro as the host, just like they are on Solaris/FreeBSD.
> I would argue that the stable syscall interface is one of the main drivers behind Linux's success. Containers as we know them would be quite different if they needed to somehow merge the kernel-version-appropriate .so files into the fs. They would forever remain just sandboxes of essentially the same distro as the host, just like they are on Solaris/FreeBSD.
I suspect the order of cause and effect is reversed. Linux is amazingly successful, and as a result containers were shaped to fit the capabilities and needs of Linux. The reason Linux has a strong syscall ABI is that it doesn't have a userspace. Glibc is only loosely related to the kernel and cannot be the kernel's ABI compatibility layer, as they are wholly independent projects.
Another way of looking at this is that the overhead and additional abstraction of containers are a workaround for the deficiencies of Linux jail/zone facilities (i.e., it doesn't have them; you have to carefully piece together a secure sandbox out of various cgroup and namespace components).
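To make the "stable syscall ABI" point concrete, a minimal sketch (assuming the Rust libc crate on Linux): the numeric syscall interface below is what the kernel promises to keep stable, independent of whatever glibc or musl a container image happens to ship.

    fn main() {
        // Call Linux directly by syscall number, bypassing the C library wrappers.
        let pid = unsafe { libc::syscall(libc::SYS_getpid) };
        let uid = unsafe { libc::syscall(libc::SYS_getuid) };
        // These numbers and their semantics are the interface Linux keeps stable,
        // which is why a container can bundle a completely different userspace
        // than the host and still run.
        println!("pid={pid} uid={uid}");
    }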
> Also something like WSL would be impossible
Some interesting things about that:
1. WSLv1 was a partial Linux syscall emulator, sure.
2. Microsoft gave up on that approach, and WSLv2 is just an Ubuntu VM running in Hyper-V. So maybe it was impossible anyway; Linux just provides a ton of system calls and it will always be difficult to faithfully implement all of them. And as some side threads have noted, Linux likes to break the ABI of sysfs files all the time. But those don't count, for some reason.
3. FreeBSD has a Linux syscall emulation layer (that long predates WSL) using essentially the same premise as WSLv1. You're correct that this style of implementation (syscall ABI) only works due to Linux's syscall ABI choices.
4. However, something like WSL is not impossible against a shlib ABI. The canonical example here is WINE, which implements (much of) the Windows NT shared-library (DLL) level stable ABI. The same could be done for other systems that provide ABI stability at the DLL level, such as macOS or FreeBSD.
Maybe I'm missing something, but e.g. on the Android clock you can dismiss an upcoming alarm. So if you wake up, dismiss it, and it never rings, that kind of proves you really woke up beforehand, right?
I think the most expensive part isn't the address translation itself but the TLB housekeeping when mappings get remapped or invalidated. This is especially true with virtualization, where the hypervisor often needs to do extra work like (un)pinning guest pages, translating guest real addresses to host real addresses, and reissuing TLB flushes.
AFAIK the upcoming Porsche Taycan has a two-speed transmission. In the Taycan it's used for better efficiency above roughly 160 kph, something an American car really doesn't need, and it costs them a slower 0-100 kph time. Tesla uses another trick to get better acceleration to 100 kph: they have two motors, and one is geared for high efficiency at US highway cruising speeds (so ca. 120 kph).
In general electric motors have a very wide band of excellent efficiency and an even wider band of great efficiency, and they are orders of magnitude more efficient than an ICE in any part of their operational torque/speed range.
I deleted my account when they purposefully made their Linux version not launch on any FS other than ext4. I use btrfs, which supports all the features they use, yet Dropbox still refused to work with it. I complained to their support and then deleted my account. Then again, I guess having a private server with NextCloud means I'm pretty privileged.
I hear this claim almost every day, yet it's been years since I've actually felt my normal work computers being slow. Hell, even when booting an OS in an emulator running in a browser, proper software still runs blazingly fast. It's just that some people seem to use an awful lot of terrible software, but I don't seem to be one of those people.
You were probably also using several thousand dollar computers.
Try making the same claim using a thrift store laptop for < $200, which is all the computing power a whole lot of people have access to (if any). Yes, computing is fine (and a little better than 20 years ago) for software made and used by wealthy people. You have to slide down the curve a bit to discover the frustration with everything being terribly hungry for ever-increasing resources.
Isn't that directly contrary to GP's claim of "look at how ridiculously powerful our computers are and they're still so slow" though? Of course slow computers are slow.
Just the other day there was a thread comparing start times for LibreOffice, where 3 seconds was considered good. Three seconds on modern hardware is a damned eternity. How much does LibreOffice do that Word 5.0 doesn't, really? Yet the latter would probably start inside an emulator faster than the former does natively.
LibreOffice and its predecessors (OpenOffice and StarOffice) have always been considered bloated for as long as I can remember, which is nearly 20 years. While I'm very grateful for having a free, open source office suite, I've always found OpenOffice and LibreOffice to be rather clunky no matter what platform I'm using. This is especially true on macOS, where a lot of the UI elements of LibreOffice don't fit in with regular Mac applications, although thankfully things have progressed from the days of having to run the even-slower NeoOffice on Mac OS X Tiger, which used the Java runtime (I remember it taking up to 15 seconds to load on my 1.83GHz Core Duo MacBook back in 2006).

It might have to do with how LibreOffice decided to handle its UI elements. Writing cross-platform software is hard, and it often results in making compromises that affect conformance with specific platform UI guidelines and performance (consider how controversial Electron apps are with some users, for example).
But even with LibreOffice's clunkiness, I still use it at home. While I am partial to Apple Keynote for presentations, I prefer LibreOffice Writer and LibreOffice Calc to Apple Pages and Apple Numbers, respectively. And whenever I'm on a Linux or FreeBSD machine, LibreOffice is available for me to use, while iWork and Microsoft Office are not options.
> How much does LibreOffice do that Word 5.0 doesn't, really?
It has to support and parse a billion more formats, including MS Word file formats that are in some cases literal dumps of Word's memory (IIRC).
Rendering things properly is also a difficult problem to solve. Look at how difficult it has been, for much of the 21st century, to get a webpage to render uniformly across all major browsers, and then realise that you not only have to display the same across all major browsers, but also be 30 years backwards compatible.
On a modern OS (like anything in the last 25-odd years that has demand paging), in many situations that code isn't even loaded into RAM before it's used.
While that is an additional load if you deal with those formats, the root of things like startup and runtime bloat lies elsewhere.
When do you want to incur the overhead? Do you want a user to wait once for 40 seconds, or wait 10 extra seconds for something to load/save because you're loading shit on the fly? It's a question of trade-offs.