So the exploiters have deprecated that version of spyware and moved on, I see. This has been the case every other time: the state actors realize there are too many fingers in the pie (every other nation has caught on), the exploit is leaked and patched. Meanwhile, all actors have moved on to something even better.
Remember when Apple touted its security platform all-up, and a short time later we learned that an adversary could SMS you and pwn your phone without so much as a link being clicked?
Each time NSO had the next chain ready prior to patch.
I recall working at a lab a decade ago where we were touting a full end-to-end exploit chain on the same day that the target product was announcing full end-to-end encryption -- which we could bypass with a click.
It's worth doing (Apple patching), but it's a reminder that you are never safe from a determined adversary.
My iOS devices have been repeatedly breached over the last few years, even with Lockdown Mode and a restrictive (no iCloud, Siri, FaceTime, AirDrop) MDM policy via Apple Configurator. Since moving to the 2025 iPad Pro with MIE/eMTE and Apple (not Broadcom & Qualcomm) radio basebands, it has been relatively peaceful. Until the last couple of weeks, maybe due to leakage of this zero-day and PoC as iOS 26.3 was being tested.
I would happily pay Apple an annual subscription fee to run iOS N-1 with backported security fixes from iOS N, along with the ability to restore local data backups to supervised devices (which currently requires at least 2 devices, one for golden image capture and one for restore, i.e. "enterprise" use case). I accept that Apple devices will be compromised (keep valuable data elsewhere), but I want fast detection and restore for availability.
GrapheneOS on Pixel and Pixel Tablet have been anomaly free, but Android tablet usability is << Apple iPad Pro.
USB with custom Debian Live ISO booted into RAM is useful for generic terminal or web browsing.
could you please elaborate on how you determine that your devices have been breached? e.g. referring to "anomaly free" makes it sound like you might be witnessing non-security-related unexpected behaviour? sorry for the doubt, i'm curious
Explained at length below: after a subjective indicator of possible breach, I monitor, allowlist, and then delete outbound network traffic sources (i.e. apps) on the device, then look closely at any remaining non-allowlisted traffic, which should be zero.
First idea is great honestly - lots of vendors do this. I use Firefox's long-term stable release, Chrome offers this for enterprise customers, and Windows even offers multiple options (LTSC being the best by far).
Would also make a great corporate / government product - I doubt they care about charging the average consumer for such a subscription (not enough revenue) but I can see risk averse businesses and especially government sectors being interested.
By definition you will have access to things Apple won't publish or support at subsidized rates below the fully loaded hourly cost of a senior engineer.
Because you will be paying the full unsubsidized rate for any support needed for features not available to the mass market.
It's like how IBM will gladly send a team of senior engineers to help enterprise clients resolve every last possible request.
Edit: As compared to mass-market features, where the economics don't work unless they're close to 100% certain most users won't require any costly support.
- Signup for Apple Enterprise account with direct billing
- Buy one hardware device direct via Enterprise account
- Buy one MDM license for the hardware device
- Sign contract for support at $500/hr, no minimum commitment
- Get access to docs & tools for iOS 18 on new hardware (don't need support)
Apple Enterprise Developer account requires 100 employees minimum, but Apple Enterprise does not.
Just to save everyone the read, reading through the replies, this person is very clearly paranoid and has no clear evidence of an actual breach. I have zero idea why people are actually engaging with this.
This thread (on a story about 10 year old 0-day that exposed 2 billion devices to potential breach!) has many comments questioning the mere possibility of repeated breach, yet not a single comment engaging the point of my original post -- that Apple's 2025 introduction of MIE/eMTE changed the observable device behavior vs. Apple devices of the previous five years. On the new iPad Pro, MIE was shipped alongside Apple's $1B investment in modem technology to replace Qualcomm cellular and Broadcom WiFi/BT radios used on billions of existing devices.
Memory Integrity Enforcement (MIE) is the culmination of an unprecedented design and engineering effort, spanning half a decade, that combines the unique strengths of Apple silicon hardware with our advanced operating system security to provide industry-first, always-on memory safety protection across our devices — without compromising our best-in-class device performance. We believe Memory Integrity Enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems.
> has no clear evidence of an actual breach
If the perceived breaches during 5 years of using multiple generations of Apple devices were due to methodology errors leading to false positives, why did they stop after moving to 2025 Apple hardware with MIE and Apple-only radio basebands?
Presence of one or more: unexpected outbound traffic observed via Ethernet, increased battery consumption, interactive response glitching, display anomalies ... and their absence after hard reset key sequence to evict non-persistent malware. Then log review.
What are examples of logs that you're considering IOCs? The picture you are painting is basically that most everyone is already compromised most of the time, which is ... hard to swallow.
By minimizing apps on the device, blocking all traffic to Apple's 17.0.0.0/8, using Charles Proxy (and NetGuard on Android) to allowlist IP/port for the remaining apps at the router level, and then manually inspecting all other network activity from the device. Also the disappearance of said traffic after hard reset.
Sometimes there were anomalies in app logs (iOS Settings - Analytics) or sysdiagnose logs. Sadly iOS 26 started deleting logs that have been used in the past to look for IOCs.
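For what it's worth, the core of that check is simple to sketch: keep a set of known (destination, port) pairs per device and flag any observed flow outside it. The addresses and flow records below are made-up placeholders, not real endpoints or IOCs:

```python
# Sketch of a router-level allowlist check: every outbound flow from the device
# must match a known (dst_ip, dst_port) pair established while profiling apps.
# All addresses below are illustrative placeholders.

ALLOWLIST = {
    ("17.253.144.10", 443),   # e.g. an Apple-documented endpoint
    ("151.101.1.6", 443),     # e.g. a CDN host an allowlisted app uses
}

def unexplained_flows(observed):
    """Return flows whose (dst_ip, dst_port) is not on the allowlist."""
    return [f for f in observed
            if (f["dst_ip"], f["dst_port"]) not in ALLOWLIST]

flows = [
    {"dst_ip": "17.253.144.10", "dst_port": 443},
    {"dst_ip": "198.51.100.23", "dst_port": 8443},  # unknown destination
]
for f in unexplained_flows(flows):
    print(f"investigate: {f['dst_ip']}:{f['dst_port']}")
```

Anything the function returns is merely "unexplained", not proof of compromise — each hit still has to be traced to an app, an undocumented Apple service, or something else.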
How did you determine that a connection was malicious? Modern apps are noisy with all of the telemetry and ad traffic, and that includes a fair amount of background activity. If all you’re seeing are connections to AWS, GCP, etc. it’s highly unlikely that it’s a compromise.
Similarly, when you talk about it going away after a reset that seems more like normal app activity stopping until you restart the app.
Traffic was monitored on a physical ethernet cable via USB ethernet adapter to iOS device.
Charles Proxy was only used to time-associate manual application launch with attempts to reach destination hostnames and ports, to allowlist those on the separate physical router. If there was an open question about an app being a potential source of unexpected packets, the app was offloaded (data stayed on device, but app cannot be started).
MDM was not used to redirect DNS, only toggling features off in Apple Configurator.
Surely you used several USB Ethernet adapters to rule them out as being the source as well right? Those types of dongles are well known for calling home.
Good observation :) Multiple ethernet adapters: Apple original (ancient USB2 10/100), Tier 1 PC OEM, plus a few random ones. Some USB adapters emit more RF than others.
The allowlist excluded the hostnames for services and CDNs (some of which resolved to GCP, Akamai, etc.) that Apple publishes for sysadmins of enterprise networks, https://news.ycombinator.com/item?id=46994394. It's indeed possible that one of the unknown destination IPs could have been an undocumented Apple service, but some (e.g. OVH) seem unlikely.
So how did you identify this as a breach? I'm struggling to find this credible, and you've yet to provide specifics.
Right now it comes across as "just enough knowledge to be dangerous"-levels, meaning: you've seen things, don't understand those things, and draw an unfounded conclusion.
Feel free to provide specifics, like log entry lines, that show this breach.
Please feel free to ignore this sub-thread. I'm merely happy that Apple finally shipped an iPad that would last (for me! no claims about anyone else!) more than a few weeks without falling over.
To learn iOS forensics, try Corellium iPhone emulated VMs that are available to security researchers, the open-source QEMU emulation of iPhone 11 [1] where iOS behavior can be observed directly, paid training [2] on iOS forensics, or enter keywords from that course outline into web search/LLM for a crash course.
I worked at Corellium tracking sophisticated threats. Nothing you’ve posted is indicative of a compromise. If you’re convinced I’d be happy to go through your IOCs and try to explain them to you.
Thanks. In this thread, I was trying to share a positive story about the recent iPad Pro _NOT_ exhibiting the many issues I observed over 5 years and multiple generations of iPhones and iPad Pros. If any new issues surface, I'll archive immutable logs for others to review.
With the link I provided, a hacker can use iOS emulated in QEMU for:
• Restore / Boot
• Software rendering
• Kernel and userspace debugging
• Pairing with the host
• Serial / SSH access
• Multitouch
• Network
• Install and run any arbitrary IPA
Unlike a locked-down physical Apple device. It's a good starting point.
I'm much more convinced that you're competent in the field of forensics. But I still don't think suspicious network traffic can be categorically defined as a 'device breach.'
For all you know, the traffic you've observed and deem malicious could just as well have been destined for Apple servers.
Apple traffic goes to 17.0.0.0/8 + CDNs aliased to .apple.com, which my egress router blocks except for Apple-documented endpoints for notifications and software update, https://support.apple.com/en-us/101555
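As an aside, checking whether a logged destination falls inside that /8 is a one-liner with Python's stdlib (the sample addresses here are illustrative):

```python
import ipaddress

# Apple's well-known legacy /8 allocation
APPLE_NET = ipaddress.ip_network("17.0.0.0/8")

def is_apple_ip(addr: str) -> bool:
    """True if addr belongs to 17.0.0.0/8."""
    return ipaddress.ip_address(addr) in APPLE_NET

print(is_apple_ip("17.253.144.10"))  # True
print(is_apple_ip("198.51.100.23"))  # False
```

Note this only covers the /8 itself; the CDN hostnames aliased to .apple.com resolve outside it and have to be matched by name.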
They said upthread that they had blocked 17.0.0.0/8 ("Apple"), but maybe there are teams inside Apple that are somehow operating services outside of Apple's /8 in the name of Velocity? I kind of doubt it, though, because they don't seem like the kind of company that would allow for that kind of cowboying.
I don't doubt it in the slightest. Every corporate surveillance firm—I mean, third-party CDN in existence ostensibly operates in the name of 'velocity'.
There’s no hard evidence that you’ve put forward that you’ve been breached.
Not understanding every bit of traffic from your device with hundreds of services and dozens of apps running is not evidence of a breach.
Have you found unsigned/unauthorized software? Have you traced traffic to a known malware collection endpoint? Have you recovered artifacts from malware?
Strong claims require strong evidence imo and this isn’t it.
As mentioned elsewhere in this thread, traffic from each iOS app was traced via Charles Proxy, the endpoints allowlisted for normal behavior, and finally the app was offloaded so it could not generate any traffic from the device. Over time, this provided a baseline of known outbound traffic from the device, e.g. after provisioning a new device with a small number of trusted apps.
I agree with other posters that you seem to be capable of network-level forensics, but you have said nothing to back up what you consider a device breach other than "some cloud-destined network traffic which disappears after a hard reset".
In my experience of forensic reports, this link is tenuous at best and would not be considered evidence or even suspected breach based on that alone.
I don't think that proves they've been breached. Are you sure you're not just seeing keep-alive traffic or something random you haven't taken into account?
From another comment - I switched phone to Pixel and it has worked well, with a separate profile for apps that require Google Play Services.
> GrapheneOS on Pixel and Pixel Tablet have been anomaly free, but Android tablet usability is << Apple iPad Pro.
iPad Pro with Magic Keyboard and 4:3 screen is an engineering marvel. The UX overhead of Pixel Tablet and the inconsistency of Android apps made workflows slow or even impractical, so I eventually went back to iPad and accepted the cost/pain of re-imaging periodically, plus having a hot-spare device.
> restrictive (no iCloud, Siri, Facetime, AirDrop ) MDM policy via Apple Configurator
MDM? That doesn't surprise me. Do you want to know how _utterly_ trivial MDM is to bypass on Apple Silicon? This is the way I've done it multiple times (and I suspect there are others):
Monterey USB installer (or Configurator + IPSW)
Begin installation.
At the point of the reboot mid-installation, remove Internet access, or, more specifically, make sure the Mac cannot DNS resolve: iprofiles.apple.com, mdmenrollment.apple.com, deviceenrollment.apple.com.
Continue installation and complete.
Add 0.0.0.0 entries for these three hostnames to /etc/hosts (or just keep the above "null routed" at your DNS server/router).
Tada. That's it. I wish there was more to it.
You can now upgrade your Mac all the way to Tahoe 26.3 without complaint, problem, or it ever phoning home. Everything works. iCloud. Find My. It seems that the MDM enrollment check is only ever done at one point during install and then forgotten about.
Caveat: I didn't experiment too much, but it seems that some newer versions of macOS require some internet access to complete installation, for this reason or others, but I didn't even bother to validate, since I had a repeatable and tested solution.
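Concretely, the null-route step above amounts to three lines in /etc/hosts (same hostnames as listed; blocking them at your DNS server/router achieves the same effect):

```
# Prevent the Mac from ever resolving Apple's DEP/MDM enrollment endpoints
0.0.0.0 iprofiles.apple.com
0.0.0.0 mdmenrollment.apple.com
0.0.0.0 deviceenrollment.apple.com
```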
16e still uses a Broadcom chip for WiFi + Bluetooth, though. iPhone Air is currently the only iPhone that uses both Apple-designed baseband + WiFi/BT chips.
Linux has few defenses against the compromise of individual programs leading to the whole system being compromised. If you stick to basic tools (command line) that you can fully trust, it might be somewhat resistant to these types of attacks. The kernel might be reasonably secure but in typical setups, any RCE in any program is a complete compromise.
Things like QubesOS can help, but it's quite high-effort to use and isn't compatible with any phone I know of.
Meh. It’s up to Apple to write secure software in the first place. Maybe if they spent more time on that instead of fucking over their UI in the name of something different, and less time virtue signalling, their shit would be more secure.
And yes because their UI folks should be spending time on the kernel. What next? If Apple didn’t have so many people working at the Genius Bar they could use some of those people to fix security vulnerabilities?
Are you suggesting that money spent on marketing - to the extent that it doesn't actually increase market share/sales - couldn't be spent on hardening or vulnerability payouts, etc?
Apple doesn't have unlimited money. It all gets allocated somewhere. Allocating it in places that don't improve security or usability or increase sales is, in this sense, a wasted opportunity that could be more efficiently allocated elsewhere.
> Are you suggesting that money spent on marketing - to the extent that it doesn't actually increase market share/sales - couldn't be spent on hardening or vulnerability payouts, etc?
If Apple had unlimited money they’d just buy the exploit makers at whatever asking price. Or they’d set exploit bounties at a price guaranteed to outbid others etc.
No, just like any other company they don’t have unlimited money and my point stands.
Really? You don’t think Apple could “afford” to set aside $500 million dollars for instance to pay off exploit makers? Less than 0.5% of their profit? Or even $1 billion? Less than 1% of their profit?
Ofc they could afford to, but they don't. They could also afford to if they had unlimited money, but in the latter case by definition they'd lose nothing by actually buying.
Given the absurdity of the scenario and its contrivance though I’m not sure what your point is. More money spent on security is good is my point. And if they had more money they’d have more money to spend on security. And if they didn’t spend money on dumb shit like virtue signaling then they’d have more money. That’s the reasoning.
My point is that it’s silly to say that Apple doesn’t have enough money left over after spending money on marketing to pay off people who find security vulnerabilities if they have $110 billion in profit after spending money on marketing.
If you had to spend 0.5% of your income for something in a year, would that adversely affect how you chose to spend the other 99.5%?
My ethics are that certain people will die in certain circumstances and I’m okay with that. I also have no issues working on something that may result in a person’s death at a later stage. One example might be that if I worked on an automobile assembly line it might occur to me that the car I’m working on would at some point crash and the occupants be killed. But why would I care? There’s a chain of causation that you can surely understand, one that in this case would be broken many times before then (assuming I wasn’t negligent in assembling the car).
But again, your condescending tone proves my point. You and I don’t have the same values. That’s okay. But keep yours to yourself and I’ll keep mine to myself, right? That’s my point.
You're confusing ethics with your own personal views. Ethics is a subject concerning right or wrong. It's neither subjective nor objective - it's just a particular subject encompassing particular issues. Your personal opinion on a particular issue might go some way toward describing what YOU think is ethical behaviour. That's subjective. It describes a factual state (viz., your opinion about something). My opinion may be very different from yours. My opinion is also subjective.
If you think never harming any person is the highest human aspiration, then great! I wish you well on that journey. I disagree though, and personally - as a matter of my own morality and philosophy about the world - I think the earth would be a much better place with maybe 1/2 the current population (assuming we could cull the right people). Avoiding causing harm to others isn't really something I care about, and I think there are more important and more interesting things to worry about. I also think killing is absolutely justified under certain conditions and I also think the world would be objectively better off if certain people didn't exist. We disagree about this, but that doesn't mean we aren't both acting ethically. We just have very different ideas about what is good and bad and right and wrong.
Both of us can act ethically despite holding those contrary positions and stay within our own logical frameworks. I hope that makes sense to you.
Now, once again the main point was that doing work for the police or hacking shit for governments is a legitimate occupation and is legal, even if it leads to somebody being executed or arrested or deported (in fact, those are also legitimate things that plenty of people have no problems with). Laws generally reflect society's overall views on some subject matter. Feel free to Google social facts and Durkheim and Hart and the rule of law and theory of laws. Stating such is to state objective facts. If you dislike those occupations, that's cool - some people dislike prostitution, but it's a legitimate and legalised occupation in many places. But your opinion on the matter doesn't delegitimise it, and frankly nobody wants to hear your casting judgment on others based on your own personal opinions. This is the issue with protestors today - nobody else cares, man. Leave people alone lol.
I totally agree, and it's basically theft that Apple simply doesn't have a standing offer to outbid anyone else for a security hole.
That said, we all get the same time on this earth. Spending your time helping various governments hurt or kill people fighting for democracy or similar is... a choice.
I don't think democracy is the panacea you seem to think it is, but that's another issue. Certainly, cracking software for governments and the police is no less legitimate an existence and occupation as, say, working for an NGO.
There is one non-technical countermeasure that Apple seems unwilling to try: Apple could totally de-legitimize the secondary access market if they established a legal process for access their phones. If only shady governments require exploits, selling access to exploits could be criminalized.
We have a word for this: a backdoor. It wouldn't de-legitimize the secondary access market. It would just delegitimize Apple itself to the same level. Apple seems to care about its reputation as the defender of privacy, regardless of how true it is in practice, and providing that mechanism destroys it completely.
It would not completely de-legitimize it. Maybe a government doesn't want anyone to know they are surveilling a suspect. But it definitely would reduce cash flow at commercial spyware companies, which could put some out of business.
Your opinion is that Apple should have just handed over Jamal Khashoggi‘s information to the Saudi Arabian agents who were trying to kill him, because then Saudi Arabia wouldn’t have been incentivized to hack his phone? I think you’ll find most people’s priorities differ from yours.
>It's worth doing (Apple patching) but a reminder that you are never safe from a determined adversary.
I hate these lines. Like yes NSA or Mossad could easily pwn you if they want. Canelo Alvarez could also easily beat your ass. Is he worth spending time to defend against also?
Memory Tagging Extension is an Arm architectural feature, not an Apple invention. Apple integrated and productised it, which is good engineering. But citing MTE as proof that Apple’s model is inherently superior misses the point. It doesn’t address the closed trust model or lack of independent system verification.
Your claim wasn't about inherent superiority or who invented what, your claim was "that Apple's approach is security by obscurity with a dollop of PR." The fact that they deployed MTE on a wide scale, along with many other security technologies, shows that not to be true.
MTE is an Arm architectural feature. Apple integrated it, fine. That’s engineering work. But the implementation in Apple silicon and the allocator integration are closed and non-auditable. We have blog posts and marketing language, not independently verifiable source or hardware transparency.
So yes, they deploy mitigations. That doesn’t negate the fact that the trust model is opaque.
Hardening a class of memory bugs is not the same thing as opening the platform to scrutiny. Users still cannot independently verify kernel integrity, inspect enforcement logic, or audit allocator behaviour. Disclosure and validation remain vendor-controlled.
You’re treating ‘we shipped a mitigation’ as proof against ‘the system is closed and PR-heavy.’ Those are different axes.
"Security by obscurity" does not mean "closed." It specifically means that obscurity is a critical part of the security. That is, if you ever let anyone actually see what was going on, the whole system would fall to pieces. That is not the case here.
If what you meant to say was "the system is closed and PR-heavy," I won't argue with that. But that's a very different statement.
Meanwhile Apple made a choice to leave iOS 18 vulnerable on the devices that receive updates to iOS 26. If you want security, be ready to sacrifice UI usability.
If you set Liquid Glass to the more opaque mode in settings I find iOS usability to be fine now, and some non-flashy changes such as moving search bars to the bottom are good UX improvements.
The real stinker with Liquid Glass has been macOS. You get a half-baked version of the design that barely even looks good and hurts usability.
It's a rug-pull going against the tradition of supporting the most recent 2 OS versions until the autumn refresh, simply to force users onto 26 with an artificially created false choice between security and usability. This is bullshit.
decade-old vulns like this are why the 'you're not interesting enough to target' argument falls apart. commercial spyware democratized nation-state capabilities - now any mediocre threat actor with budget can buy into these exploits. the Pegasus stuff proved that pretty clearly. and yeah memory safety helps but the transition is slow - you've got this massive C/C++ codebase in iOS that's been accumulating bugs for 15+ years, and rewriting it all in Swift or safe-C is a multi-decade project. meanwhile every line of legacy code is a ticking time bomb. honestly think the bigger issue is detection - if you can't tell you've been pwned, memory safety doesn't matter much.
yeah that's a good point about biome files - I hadn't thought about how much attack surface that creates. honestly Apple's forensics story feels pretty half-baked compared to their security theater around app store review and sandboxing. like they're great at preventing obvious malware installs but once something gets through (or comes in via MDM/enterprise provisioning) the visibility just isn't there. idk if it's a philosophical thing or just not prioritized, but the gap between 'we detected something' and 'here's what it actually did' is way too big for a company that claims to care about security
I wonder what the internal conversations are like around memory safety at Apple right now. Do people feel comfortable enough with Swift's performance to replace key things like dyld and the OS? Are there specific asks in place for that to happen? Is Rust on the table? Or does C and C++ continue to dominate in these spaces?
That does universal copy and paste with my linux laptop? Airdrop with my android tablet?
I can copy something on my macbook and paste that on my iphone - nice feature. Or to my iPad. I’m a sucker for interconnected technology, no hassle with transferring data between my devices.
Sure there are alternatives, but none that provide such integration amongst a diverse class of devices. That's the true monopoly they have - unfortunately.
KDEconnect over a VPN works extremely well for clipboard and file and notification sharing. You don't have to use KDE for it. It works with Linux and Android. It also doesn't require an account and accepting terms, so it is strictly superior.
So there is a definite alternative. Why doesn’t anyone start Orange Inc. to provide a one-stop hardware solution using Linux. I mean a place where phones, laptops, tablets are sold that are setup to work together?
Why doesn’t Ubuntu (for example) move into selling integrated hardware solutions based on Linux - for the consumer market.
Ironically this is a security focused thread. The solution here isn’t to switch to a Linux phone, a platform that has absolutely atrocious security, especially compared to even stock iOS/Android. The only alternative that actually increases privacy and security is GrapheneOS. If one doesn’t want to buy a Pixel in order to have it, they can wait and see what the new OEM that will support GOS will be later this year before deciding if it’s worth waiting for in 2027.
Like what the person you replied to said. Sandboxing on Linux phones is incredibly weak outside of non-Flatpak Chromium browsers. And even Flatpak itself is a pretty weak sandbox compared to iOS/Android sandboxing. Part of this stems from Android and iOS being developed as sandbox-first OSes, so this could be said for any desktop operating system really aside from ChromeOS.
Also, sure, you could avoid crapware from Meta, Google and the likes, but you could still be exposed to nefarious programs via things like supply-chain attacks (i.e. npm), or the developer turning coat or not realizing their app has an exploit, etc.
Linux also lacks a thorough permissions system unlike iOS/Android and the even more granular GrapheneOS.
Linux phones lack verified boot meaning persistent malware is trivial on linux devices. There is no MTE/MIE on Linux phones and even Google themselves say like 70% of malware spawns from memory exploits[1].
Also linux only really has block-level encryption, not file-based encryption like iOS/Android. It would be trivial for LEO to access your device unless it was totally powered off, and then the only protection is LUKS. Or really, even if you lose your phone, someone so inclined could just extract all the data if it was powered on but on the "lock screen," as most if not all desktop (and I'd imagine linux phone) environments do not actually do any encryption when the system is locked; it's just a cosmetic lock for all intents and purposes.
It would maybe be possible to somewhat mitigate that with Cryptomator, or somehow using fscrypt since that's what Android uses, but I don't know.
Also even for basic things like clipboard protection, even with Wayland there are ways around it so that an app can read anything from the clipboard (not usually done for nefarious means in my experience, but it’s possible — see an app like Vicinae’s clipboard history and clipboard-centric features running on Wayland).
There’s more but this is like a short overview.
This doesn’t even get into people preferring Firefox on Linux which is light years behind Chromium based browsers in terms of security.
While it’s not a huge issue on desktop depending on how you view it, I would imagine phones see way more of people’s private data than their computers do and so I think it’s more beneficial to have higher security here than give that up for Linux.
I think that's because they don't consider the apps in their app store to be malware despite doing things like starting a server on localhost to circumvent sandbox.
> Linux phones lack verified boot meaning persistent malware is trivial on linux devices.
Librem 5 has a 3FF Smart card reader. Also, it can be completely wiped and reinstalled, ensuring that your phone is cleaned whenever you suspect a compromise.
> Or really even if you lose your phone and someone was so inclined to they could just extract all the data if it was powered on but on the “lock screen,” as most if not all desktop
I never heard about such possibility. Could you provide some details or links on how this could be done? AI says it's not really possible without very sophisticated instruments.
> It would maybe be possible to somewhat mitigate that with cryptomator or somehow using fscrypt since that’s what Android uses but I dont know
Indeed, GNU/Linux phones can and probably will improve their security with time taking some things from Android.
> Also even for basic things like clipboard protection, even with Wayland there are ways around it so that an app can read anything from the clipboard
You can't just say this without any evidence.
> This doesn’t even get into people preferring Firefox on Linux which is light years behind Chromium based browsers in terms of security.
Unless you switch off JavaScript, which is what I do.
Whenever plugging a hole like this, the OS should kinda leave it “open” as a kind of honeypot and immediately show a warning to the user that some exploit was attempted. Granted, the malware will quickly adapt but you should at least give some users (like journalists or politicians) the insanely important information about them being targeted by some malicious group.
I don't know what "equally annoying" would be for a company and its customers, i.e. a fair compromise. But we need a law requiring companies to open source their hardware within X days of end of life support.
And somehow make sure these are meaningful updates. Not feature parity with new hardware, but security parity when it can be provided by a software only update.
Otherwise a company in effect takes back the property, without compensation.
The battery has very little capacity now, so I'm planning on buying a new iPad air with the M chip. It's really a game changer in terms of performance and efficiency.
Well whatever the zero means, it can't be the number of days that the bug has been present, generally. It should be expected that most zero-days concern a bug with a non-zero previous lifespan.
“Zero day” has meant different things over the years, but for the last couple-ish decades it’s meant “the number of days that the vendor has had to fix them” AKA “newly-known”.
Old-timers, at this point, but I take your point. I guess, for that matter, the terms "social engineering" (as it relates to manipulating people into divulging secrets, etc) and "doxxing" both came from the same community, too. How bizarre. Terms that were bandied about by kids in text files became actual industry jargon (and, in the case of "doxxing", arguably mainstream).
Right, I think the use of "0-day" as "stolen, unreleased software by software pirates" predates the current use.
The other commenter is right, there's a lot of overlap in the communities. It's strange to me that I was in the "field" a good 20 years before I ever thought it would be a career opportunity. This is not a complaint by any means. :-)
It's pretty unbelievable that a zero-day can sit there this long. If one can exist, the likelihood of more existing at all times is non-trivial.
Whether it's the walled garden of iOS or the relative openness of Android, I don't think either can police everything on anyone's behalf.
I'm not sure how organizations can secure any device, iOS or Android, if they can't track and control the network layer traffic into and out of it, period, and there are zero carveouts for the OS itself around network traffic visibility.
> how organizations can secure any device ios or android if they can't track and control the network layer, period out of it, and there are zero carveouts for the OS itself around network traffic visibility.
The closest I've seen is an on-device VPN like Lockdown Privacy , but it can't block Apple bypassing the VPN.
iOS is one problem, but it goes for every other device/server/desktop/appliance that you use. You can take a lot of precautions, mitigate some risk, and ensure that operations can continue even if something bad happens¹, but you can't ever "be safe".
¹ "There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know." (Often attributed to Donald Rumsfeld, though he did not originate the concept.)
The exploit was always there, you just didn't know about it, but attackers might have. The only thing that changed is that you're now aware that there's a vulnerability.
This kind of mental model only works if you think of things as huge shadowy blobs, not people.
dyld has one principal author, who would 100% quit and go to the press if he was told (by who?) to insert a back door. The whole org is composed of the same basic people as would be working on Linux or something. Are you imagining a mass of people in suits who learned how to do systems programming at the institute for evil?
Additionally, do you work in tech? You don’t think bugs appear organically? You don’t think creative exploitation of bugs is a thing?
This vastly overstates both the competence of spy agencies and of software engineers in general. When it comes to memory unsafe code, the potential for exploits is nearly infinite.
It was a complicated product that many people worked to develop, and it took advantage of many pre-existing vulnerabilities, as well as knowledge of complex and niche systems, in order to work.
Yeah, Stuxnet was the absolute worst of the worst; we will likely never truly know the depths of its development, nor its cost. It was an extremely sophisticated, hyper-targeted, advanced digital weapon. Nation states wouldn't even use this type of warfare against pedophiles.
Stuxnet was discovered because a bug was accidentally introduced during an update [0]. So I think it speaks more to how vulnerabilities and bugs do appear organically. If an insanely sophisticated program built under incredibly high security and secrecy standards can accidentally push an update introducing a bug, then why wouldn't it happen to Apple?
Maybe sometimes? With how many bugs are normally found in very complex code, would a rational spy agency spend the money to add a few more? Doing so is its own type of black op, with plenty of ways to go wrong.
OTOH, how rational are spy agencies about such things?
To what? Write 100% bug free software? I don't think that's actually achievable, and expecting so is just setting yourself up for disappointment. Apple does a better job than most other vendors, except maybe GrapheneOS. Mainstream Android vendors are far worse. Here's Cellebrite Premium's support matrix from July 2024, for locked devices. iPhones are vulnerable after first unlock (AFU), but Androids are even worse. They can be hacked even if they have been shut down/rebooted.
The problem with that is it runs on a desktop, which means very little in the way of protection against physical attacks. You might be safe from Mossad trying to hack you from halfway across the world, but you're not safe from someone doing an evil maid attack, or from someone seizing it and brute-forcing the FDE password (assuming you didn't set a 20 random character password).
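The arithmetic behind that parenthetical is worth spelling out. A back-of-the-envelope sketch, assuming an attacker budget of 10^12 guesses per second (the guess rate is an assumption; real offline rates depend on the KDF):

```python
import math
import string

def bruteforce_years(length: int, alphabet_size: int,
                     guesses_per_second: float = 1e12) -> float:
    """Worst-case time to exhaust the keyspace, in years.
    guesses_per_second is an assumed attacker budget (1 trillion/s)."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second / (60 * 60 * 24 * 365)

# printable ASCII minus whitespace: 52 letters + 10 digits + 32 punctuation
printable = len(string.ascii_letters + string.digits + string.punctuation)

# 20 random printable characters: ~131 bits of entropy
print(f"entropy: {20 * math.log2(printable):.0f} bits")
print(f"years:   {bruteforce_years(20, printable):.2e}")

# 8 random lowercase letters: ~38 bits, exhausted in under a second
print(f"8 lowercase: {26**8 / 1e12:.4f} s")
```

The gap is the whole point: 8 lowercase characters fall in a fraction of a second at that rate, while 20 random printable characters are out of reach by dozens of orders of magnitude, before the key-derivation function slows the attacker down further.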
This is a newly-discovered vulnerability (CVE-2026-20700, addressed along with CVE-2025-14174 and CVE-2025-43529).
Note that the description "an attacker with memory write capability may be able to execute arbitrary code" implies that this CVE is a step in a complex exploit chain. In other words, it's not a "grab a locked iPhone and bypass the passcode" vulnerability.
I may well be missing something, but this reads to me as code execution on user action, not lock bypass.
Like, you couldn’t get a locked phone that hadn’t already been compromised to do anything because it would be locked so you’d have no way to run the code that triggers the compromise.
Am I not interpreting things correctly?
[edit: ah, I guess “An attacker with memory write capability” might cover attackers with physical access to the device and external hardware attached to its circuit board that can write to the memory directly?]
> Remember when Apple touted the security platform all-up and a short-time later we learned that an adversary could SMS you and pwn your phone without so much as a link to be clicked.

KISMET: 2020, FORCEDENTRY: 2021, PWNYOURHOME, FINDMYPWN: 2022, BLASTPASS: 2023.

> Each time NSO had the next chain ready prior to patch.