As someone who has been using Linux quite happily on the desktop for more than 20 years now, I have to say it remains an eternal experiment, feature-wise as well as security-wise.
I use it both privately and professionally, and while I accept that security-wise (even with SELinux) it feels lacking, feature-wise it far exceeds Windows, which I use as my other OS, except in the gaming experience.
I wish I had something like GrapheneOS on desktops (yes I know about Qubes)
I tried Ubuntu last year, and it felt very limited compared to Windows. It lacked very basic features like face/fingerprint login, hybrid sleep, factory reset, live FDE (or post-installation FDE), fast fractional HiDPI, two-finger right-click, "sudo" on dock etc.
There is https://grsecurity.net/ but it's not free. It's developed by people with much more experience defending against attackers than all of the other projects combined.
Don't know much about SecureBlue but Kicksecure isn't comparable to Qubes at all. It's a hardened distro, not a way to isolate workloads through virtualisation. Depending on what you're trying to achieve they can both fit but they are fundamentally very different in their approach to security.
> I swear to god reading comprehension is approaching zero due to chatgpt.
> I wish I had something like GrapheneOS on desktops
Secureblue is essentially as close to GrapheneOS as desktop Linux can get. Neither my response nor the original question required Qubes comparisons; it was merely mentioned.
> grsecurity® is the only drop-in Linux kernel replacement offering high-performance, state-of-the-art exploit prevention against both known and unknown threats.
Secureblue, by contrast, is a full desktop distro (not just a kernel) that integrates key GrapheneOS hardening tools, like their hardened_malloc and forks of their hardened Chromium, and works with Flatpak as a base for hardened application deployment.
You are literally saying that hardening the kernel is the same as having the desktop environment hardened and a basis for app isolation. And to add a cherry on top, both secureblue and Kicksecure use almost all of the same hardening additions to the Linux kernel as grsecurity.
You do not understand what you are talking about because if you did you'd be embarrassed for how braindead your response is.
You have already proven you don't understand the difference between kernel and userland hardening; why should I bother working for you? Google it yourself.
I agree with you fully that mixing up kernel hardening and userspace hardening is one thing, but to suggest this little repo: https://github.com/Kicksecure/security-misc offers the same level of kernel hardening as grsecurity is probably the funniest thing I've read today.
To start with, some of the protections in grsecurity (specifically PaX) apply to userspace - namely MPROTECT and PAGEEXEC. Then there are the many kernel-level protections:
KERNEXEC
The 3 different ASLR options (yes, vanilla Linux has some watered-down versions of these)
Then there are all the options that your little GitHub repo has no equivalent of (to be fair, vanilla Linux HAS backported some of these):
[ ] Sanitize all freed memory
[ ] Sanitize kernel stack
[ ] Prevent invalid userland pointer dereference
[ ] Prevent various kernel object reference counter overflows
[ ] Harden memory copies between kernel and userland
[ ] Automatically constify eligible static objects
[ ] Report code regions instrumented for writing to constified data
[ ] Automatically constify eligible allocated objects
[ ] Prevent various integer overflows in function size parameters
[ ] Increase coverage of size overflow checking
[ ] Log missing size overflow hash table entries
[ ] Free more kernel memory after init
[ ] Free more kernel memory after init (verbose mode)
[ ] Generate some entropy during boot and runtime
[ ] Prevent code reuse attacks
[ ] Forward edge defense (deterministic)
└─ Forward edge defense instrumentation method (callabort)
[ ] Backward edge defense (deterministic)
[ ] Backward edge defense (probabilistic)
[ ] Protect kernel stacks from each other
[ ] Report nolocal transformations
[ ] Automatically protect kernel code vulnerable to Spectre v1
[ ] Protect more kernel code vulnerable to Spectre v1
[ ] Automatically protect kernel code vulnerable to Spectre v4
[ ] Protect more kernel code vulnerable to Spectre v4
[ ] Report code found to be potentially vulnerable to Spectre v1/v4
[ ] Convert k*alloc allocations into their own slabs
[ ] Report autoslab decisions
[ ] Report some potential NULL pointer dereferences
[ ] Deny reading/writing to /dev/kmem, /dev/mem, and /dev/port
[ ] Disable privileged I/O
[ ] Randomize addresses of critical kernel objects
[ ] Harden BPF interpreter
[ ] Disable unprivileged PERF_EVENTS usage by default
[ ] Insert random gaps between thread stacks
[ ] Harden ASLR against information leaks and entropy reduction
[ ] Deter exploit bruteforcing
[ ] Hide kernel symbols
[ ] Randomize layout of sensitive kernel structures
[ ] Use cacheline-aware structure randomization
[ ] Active kernel exploit response
[ ] Dmesg(8) restriction
[ ] Deter ptrace-based process snooping
[ ] Require read access to ptrace sensitive binaries
[ ] Enforce consistent multithreaded privileges
[ ] Disallow access to overly-permissive IPC objects
[ ] Disallow unprivileged use of command injection
[ ] Disable ability of suid root apps to execute unsafe files
[ ] Auto-enable Spectre mitigations for suid-like applications
[ ] Trusted Path Execution (TPE)
I've only grabbed about half of the options grsecurity makes available to turn on, but you get the idea.
Later versions of the grsecurity patch also offer kernseal - https://grsecurity.net/featureset/memory_corruption
So yeah - to suggest your little GitHub repo that tweaks a few kernel settings is anywhere near the massive security hardening feature set that grsecurity delivers is also, well, embarrassing.
The fact that Chromium OS has been teetering on the edge of deprecation/merging with Android/Fuchsia for a decade I think has deterred people from building stuff on top of it.
It also seems to have a lot of new code every year for very few new features. It's as if they get every new intern to rewrite a bit of the innards, and then next summer another intern rewrites it again.
OTOH, it has by now served as the base for multiple container-optimized distros:
First CoreOS, which forked into Flatcar Linux (now funded by Microsoft) and Fedora CoreOS (rebased from the Gentoo/ChromeOS lineage onto a Fedora base), and Google's Container-Optimized OS (used heavily in Google Kubernetes Engine).
A lot of code for very few user-visible changes is the nature of operating systems. Making light of the work of the people who work on ChromeOS just makes you sound ignorant.
same! qubes is probably the actual solution for now, but i've seen some grapheneos people work on https://secureblue.dev/ and that seems a lot more "normal"
More power usage; you need at least 64 GB of RAM to even remotely use it as intended; no hardware acceleration by default; buggy templates; sleep is broken; it uses X11 for display; dom0 is not updated as frequently as it should be; and I hardly see any effort to document the fact that individual VMs' security matters too. That is not to say I hate Qubes. I think everyone, especially people dealing with sensitive data, should use it.
Sure, assess the threat model. I would also take security over convenience any time, just like you proposed. But for normal users, it's not that simple. I use Qubes on my test box and I love it, but it's still quite niche and, sadly, far from hitting mainstream.
Re:"Eternal experiment"... have you seen Windows 11? Or even 10? The devs can't keep their hands off of the thing, changing, breaking and fixing every component every few months.
This is the attitude that holds Linux back the most. Quick, what is the #1 priority of Linux? "Being better than Windows". Not being good, great, or even amazing? No, as long as it's 0.0001% better than Windows, it can be awful!
And people will say "Yeah, but it is amazing". Then why do so many people feel the need to defend it in terms of _being better than Windows_? Clearly they prioritize the perception of being better than Windows over being actually good, because otherwise they would defend it by pointing out how good it is. Are they all just weirdos, or have they subconsciously picked up on the real but unwritten culture of Linux?
I don’t think we need to “whatabout” Windows. I don’t think anyone would say they are trying too many experiments… actually, Windows feels like it was mostly made by overworked folks doing the bare minimum to not get fired. No time for experiments or caring.
A big part of the difference is that the BSDs are designed by a governing committee. They usually don't have 15 different solutions for the same problem, but instead 2-3 solutions that work well.
Take filesystems: the official filesystems are UFS(1/2) and ZFS. They have GEOM in place of LVM, LUKS, and more.
That being said, the majority of money and development goes into Linux, which by itself may make it a better system (eventually).
I can't help but make the comparison with cryptographic network protocols, where the industry started with a kitchen-sink approach (e.g. pluggable cipher suites in TLS) and ended up moving towards fixed primitives (e.g. WireGuard mostly uses DJB-originated techniques, take them or leave them).
The general lesson from that seems to be that a simpler, well-understood, well-tested and mostly static attack surface is better than a more complex, more fully-featured and more dynamic attack surface. I wonder whether we'll see a trend towards even more boring Linux distributions which focus on consistency over modernity. I wouldn't complain if we did.
> A big part of the difference is that the BSDs are designed by a governing committee. They usually don't have 15 different solutions for the same problem, but instead 2-3 solutions that work well.
The right comparison is not between a particular BSD and Linux; it's between a particular BSD and a Linux distro.
> A big part of the difference is that the BSDs are designed by a governing committee
While I cannot agree nor disagree on the quality of BSDs (haven't used one in 20 years), I find it funny that in this case a design by committee is proof of quality.
I guess it's better than design by headless chicken which is how the Linux user-space is developed. Personally, I am a big fan of design by dictatorship, where one guy at the top either has a vision or can reject silly features and ideas with strong-enough words (Torvalds, Jobs, etc.) - this is the only way to create a cohesive experience, and honestly if it works for the kernel, there's no reason it shouldn't work in userspace.
> While I cannot agree nor disagree on the quality of BSDs (haven't used one in 20 years), I find it funny that in this case a design by committee is proof of quality.
I don't think "design" is the correct word: organized, managed, or run, perhaps.
> The FreeBSD Project is run by FreeBSD committers, or developers who have direct commit access to the master Git repository.[1] The FreeBSD Core Team exists to provide direction and is responsible for setting goals for the FreeBSD Project and to provide mediation in the event of disputes, and also takes the final decision in case of disagreement between individuals and teams involved in the project.[2]
There is no BDFL, à la Linux or formerly Python: it's a 'board of directors'. Decisions are mostly dispute / policy-focused, and less technical for a particular bit of code.
They decide what gets included in the default distribution; they set the goals and provide sponsorships for achieving them.
So yes, board of directors is probably more fitting.
And then of course you have the people with a commit bit. They can essentially work on whatever they like, but inclusion into the main branch is still up to the core team.
There was a huge debate some years ago when Netgate sponsored the development/porting of WireGuard to FreeBSD; the code was of poor quality and was ultimately removed from FreeBSD 13.
They are still missing something like the capability-based security that iOS and Android have, where apps have to be granted access to use things like files or the camera. It may have been considered secure a couple of decades ago, but they have fallen behind the competition.
FreeBSD literally has Capsicum: https://en.wikipedia.org/wiki/Capsicum_(Unix) That might be the most pure capability system out of all of them, though it's not something that works without application modification (yet). Android and iOS applications can automatically work with the native capability framework because they rely on higher-level SDK APIs. But AFAIU those capability systems are very coarse-grained, in the sense that it's difficult to leverage the capability system internally within a single application. And keeping lower-level APIs (e.g. for C and POSIX filesystem I/O) nominally working (if at all) requires some impure hacks. All of which makes them very similar to FreeBSD Jails or Linux containers in that respect.
I wouldn't consider any of these systems "secure", though, as a practical matter. In terms of preventing a breakout, I'd trust an application on OpenBSD with strict pledge and unveil limits, or a Linux process in a classic seccomp sandbox (i.e. only read, write, and exit syscalls), more than any of those other systems. Maybe Capsicum, too, but I'm not familiar enough with the implementation to know how well it limits kernel code surface area. But any application that can poke at (directly or indirectly) complicated hardware, like the GPU, is highly problematic unless there are proofs of correctness for any series of inputs that can be sent by the process (which I don't think is the case).
IMO, the real problem with trying to enforce capability-based systems on desktop/server environments is the correct API isn't implemented. `capabilities(7)` is only a tiny subset of `credentials(7)`, `PR_SET_NO_NEW_PRIVS` is an abomination, `SCM_RIGHTS` has warts, and `close_range` is fundamentally braindead.
We need at least the following sets: effective, permitted, bounding (per escalation method?), and the ability to make a copy of all of the preceding to automatically apply to a child (or to ourselves if we request an atomic change). Linux's `inheritable` set is just confusing, and confusion means people will use it wrong. At least we aren't Windows.
No, that requires explicit changes by programs to use meaning that malware can ignore it and steal your browser's cookies and take secret photos with your webcam.
My original statement is about how users have to explicitly give programs access to the files and the webcam before they can use them. This is missing.
I ran OpenSolaris for a while on my laptop and it was quite nice. However, the lack of support from practically any software vendor made many things a pain.
Since then even more stuff has moved to the web, but I really doubt Illumos got any extra traction.
Most of our server infrastructure runs on illumos at $work. SmartOS/Triton handles our "cloud" and OmniOS runs our storage. The linux monoculture problem luckily can still be handled with zones and bhyve, and I do trust illumos developers' competence to deliver good quality secure software a lot more than linux developers' as well.
Now if FreeBSD (or indeed illumos) would get CUDA support, we could stop using Linux for GPU nodes too.
It is possible, yes, but I would prefer full Linux-free support for production use. There is ongoing work on FreeBSD CUDA, though[0]. Just have to wait and see.
how much harder is container escaping compared to vm escaping? i understand that containers are not truly meant to be security boundaries but they are often thought of and even used as such.
> how much harder is container escaping compared to vm escaping?
The answer heavily depends on your configuration. Unprivileged with a spartan syscall filter and a security profile is very different from privileged with the GPU bind-mounted in (the latter amounts to little more than a chroot and a separate user account).
Hence if I ever get money for an infrastructure pentest, I want to include a scenario that scares me a bit: The hijacked application server. The pentesters give me a container with whatever tooling they want and a reverse shell and that gets deployed in the dev-infrastructure, once privileged and once unprivileged, both with a few secrets an application server would have. I'd just reuse a deployment config from some job. And then have at it.
It's situational, but in default configurations it's comparable. Both will need some form of unknown vuln. It boils down to whether you trust the Linux namespacing logic and container-runtime glue more than the hypervisor logic.
Let us not pretend other OSes are flawless either. Microsoft is constantly patching, and Apple has been the source of so many hacks that thousands of VIPs were affected and a person was murdered.
What a weird comment - if Apple software had fewer exploits, then the murder would have been averted? And those 'VIPs', whoever they are - would it be less significant if they were normies? I sincerely hope none of my coding mistakes ever causes a VIP to be murdered.
We're talking about a local privilege escalation here.
That assumes:
1) The attacker already has an account on the system
2) The app `udisks` is installed on the system.
Everyone is fighting the same battle, and it's a good thing. It is happening because the rest of the system is hard enough to attack these days. This is true for all major OSes.
Only fanboys bend reality to make this into a good-vs-bad argument.
Local root privilege escalation is mostly irrelevant these days. It’s only useful as part of an exploit chain, really. It’s not like shell servers are still around.
The article was about two issues that combine to make a single local-privilege-escalation, so the PAM thing isn't a separate exploit chain, it's just part of getting local root in this vulnerability.
What the parent poster meant is that you first need a way to run arbitrary code before local privilege escalation matters, so the exploit chain has to include _something_ that gets you local code execution.
I tend to agree with the parent poster: for most modern single-user Linux devices, local privilege escalation means almost nothing.
Like, I'm the only user on my laptop. If you get arbitrary code execution as my user, you can log my keystrokes, steal my passwords and browser sessions, steal my bitcoin wallet, and persist reasonably well.... and once you've stolen my password via say keylogging me typing `sudo`, you now have root too.
If you have a local privilege escalation too, you still get my passwords, bitcoin wallet, etc, and also uh... you can persist yourself better by injecting malware into sshd or something or modifying my package manager? idk, seems like it's about the same.
> ...for most modern single-user linux devices, local privilege escalation means almost nothing.
I haven't actually looked at the numbers, but I strongly suspect that it's true that the overwhelming majority of single-user Linux devices out there are Android devices. If that's true, then it's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access.
Android is nearly always a single user system in the sense that TheDong was using. Look at the context a little further down in the guy's comment:
> Like, I'm the only user on my laptop. If you get arbitrary code execution as my user, you can log my keystrokes, steal my passwords and browser sessions, steal my bitcoin wallet, and persist reasonably well.... and once you've stolen my password via say keylogging me typing `sudo`, you now have root too.
In this context, "single user system" means either "single human using the system", or "one human physically sat in front of the system's 'console' at one time". It's in contrast with systems that have multiple human users logged in and using the system simultaneously. So, nearly 100% of "single user systems" of this type will have software running under different "user" accounts on the system, but still meet the definition, because those accounts are actually "machine" or "service" accounts.
I do think that this overload of the terminology is bogus and confusing. It should be called something like "single seat system", but here we are.
> Android security is tight
Yep. That's what I said: "[I]t's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access."
The context is that on a traditional Linux laptop/desktop you are in fact running everything as one user.
Firefox, the desktop environment, your password manager and even `sudo` are traditionally all running as your own user.
This is not true in Android whatsoever.
Being multi-seat or not has few security implications - most traditional Linux systems can handle multi-seat, but they're still limited in security by running everything as a single user.
And no, nearly 100% of Linux systems do not run proper multi-user configurations, because none of the most popular distributions ship like that. Not in the context of desktop usage, anyway.
Servers do use multi-user configuration but that’s not what we’re talking about here
> The context is that on a traditional Linux laptop/desktop you are in fact running everything as one user.
Um. Have you ever run 'ps aux', guy? At minimum you're running everything as two users (root and your user account), and probably three to twenty more, depending on what you have installed. I know that on my desktop system
ps axo user | sort -u | grep -v USER | wc -l
returns 12. Even back in the late 1990s/early 2000s, the default method of operation for Linux systems was to use multiple machine accounts.
> And no nearly all 100% of Linux systems do not run proper multi-user configurations because none of the most popular distributions ship like that. Not in the context of desktop usage anyway.
Most Linux systems don't run every single program as a separate Linux user. That doesn't mean that those systems are "in fact running everything as one user".
> BUT YOUR ACTUAL DESKTOP SESSION RUNS AS ONE USER.
Yes, the things I personally run nearly always run under my user account. I've never said otherwise. I've also said that Android doesn't do things this way, and that that's a good thing. As I mentioned in my comment to TheDong: [0]
> [I]t's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access.
And my comment to you: [1]
> In this context, "single user system" means either "single human using the system", or "one human physically sat in front of the system's 'console' at one time". ... So, nearly 100% of "single user systems" of this type will have software running under different "user" accounts on the system, but still meet the definition, because those accounts are actually "machine" or "service" accounts.
And from that same comment:
> > Android security is tight
> Yep. That's what I said: "[I]t's my understanding that Android does bother to fairly properly sandbox programs from each other... so an escalation to root would actually be a significant gain in access."
Moving on.
> Yeah sure init runs as root, and maybe you have background services that run as some other user.
Correct. That's why I said:
> Most Linux systems don't run every single program as a separate Linux user. That doesn't mean that those systems are "in fact running everything as one user".
Before you succumb to another fit of rage, take a few deep breaths, review my previous comments, and notice my critique about how Android does things, as well as my commentary about how Android is also a "single-user system" (as TheDong was using the term), and how I think the term is pretty bad, but it's the one that's widely used.
I have no idea if Android uses udisks. It has been something like a decade since I last looked at 'ps' output on an Android machine, so any information on the topic I might have had has faded away with time.
This type of exploit is a goldmine for attackers: it means they have a window of a few months to years to turn any basic access into root. It doesn't have to be a super complex exploit chain; anyone running WordPress botnets is going to add this to their arsenal.
An attacker doesn't need a shell server to run code locally; you chain it with an exploit against a service, and you have root and lateral attack capabilities.