Hacker News

Unlikely. Unless we start making file systems different.


Spectators may want to archive this HN thread for posterity, because I suspect nacnud will have the last laugh. (Nacnud's comment is currently sitting at about -3 points.)

It's unlikely that future technological advancements will give malware authors more tools to work with? That seems unlikely, if history is any indication.

Imagine having 1TB of non-volatile storage that operates at speeds close to what we currently enjoy from DDR3 memory. Most average consumers won't use all of that space, so it seems entirely plausible that a virus could write itself to some region that is never overwritten. It becomes a latent piece of malware, invisible to any filesystem, but activatable on demand by some other hard-to-detect component. This two-step scheme would be difficult to detect, since the second component does nothing except activate the first (stored in non-volatile storage) when some specified criteria are met.

Or, how about key generation or password input? Remember cperciva's post about how it's extremely difficult, or even impossible, to zero a memory buffer correctly? Now add the idea that suddenly "everything is non-volatile" and you get a perfect storm of security concerns: if a program segfaults or is terminated mid-operation, any sensitive data it had written into memory might be capturable by other programs. And since the memory is non-volatile, those secrets persist "forever" (until they're overwritten, which may be a long, long time if storage is plentiful), so you may inadvertently leak secrets when you sell your memristor-backed storage: whoever buys it can do some digging and uncover whatever was left in that buffer, if it's truly non-volatile storage.
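cperciva's point was about dead-store elimination: a plain memset() just before a buffer goes out of scope is a store the compiler may silently delete. A minimal C sketch of one common workaround, calling memset through a volatile function pointer so the compiler can't prove the call has no effect (the names secure_memset and wipe are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* A volatile function pointer forces the compiler to perform the call;
 * a direct memset() here could legally be optimized away as a dead store. */
static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

void wipe(void *buf, size_t len) {
    secure_memset(buf, 0, len);  /* cannot be elided by the optimizer */
}
```

With volatile main memory, a skipped wipe only matters until the power goes out; with non-volatile memory, the unzeroed secret sits in durable storage indefinitely.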

So, while the future is exciting, underestimating the security implications probably isn't a good idea. We can develop new methodologies to combat the new security concerns, but new concerns will almost certainly materialize. Of course, calling the concerns "new" is somewhat unfair, since new concerns are almost always very old concerns manifesting themselves in new domains; but new domains usually hand malware writers new toolsets to exploit.


I downvoted nacnud because it's such a generic statement, you could post it to almost any thread. Net neutrality? "Think of the potential for viruses to spread!" iWatch? "Think of the viruses you could have on your watch!" New JS framework? "Think of the attacks it could enable!"

Yeah, of course, everything has security implications, and some technologies even more so, but unless one explains them and states how they're specific to the technology being discussed (as you did, for example), the comment doesn't really add any value.


Then you did not properly grok his comment in the first place.


Can you explain? Because I don't understand either.


A new form of persistent storage is an enabler; it may create entirely new forms of attack. Whatever you cook up using JavaScript, your watch or (that one is really reaching) Net Neutrality is not of the same degree at all; those are just (very slight) variations on existing themes.

Writable storage behind a centralized access method (a file system) is hard enough to purge and keep clean; a memory-mapped, persistent chunk of storage that is writable at the user-process level creates an entirely new class of problems.
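Today's closest analogue is a file-backed MAP_SHARED mapping: an ordinary user-level store lands in durable storage with no write() call for anything to intercept. A minimal sketch (persist_write and the path are invented for illustration):

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

enum { PAGE = 4096 };

/* A plain memcpy into a MAP_SHARED mapping is an ordinary user-process
 * store, yet the kernel will flush it to durable storage -- there is no
 * write()/close() discipline for a scanner to hook. */
void persist_write(const char *path, const char *msg) {
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return;
    if (ftruncate(fd, PAGE) == 0) {
        char *mem = mmap(NULL, PAGE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (mem != MAP_FAILED) {
            memcpy(mem, msg, strlen(msg) + 1);  /* persists after exit */
            munmap(mem, PAGE);
        }
    }
    close(fd);
}
```

With memristor-class storage the mapping wouldn't even need a backing file; every store by every process would be durable by default.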

Getting rid of viruses from BIOS flash is hard enough; this expands that concept to all of userspace.

I sincerely hope the 'x' bit will be off for such memory.


I will archive this thread to save this:

  >  1TB of non-volatile storage [...] Most average consumers won't use all of that


I'm with you. I got my first 1TB HDD last year thinking it would last a few years; it's nearly full now. Why? Because with all that space available I've changed how I use disk space - why throw stuff away. It also helps that I've moved to a higher-grade camera and to consuming video stored on HDD rather than from discs.

When we're creating fully immersive 4D environments in place of holiday snaps, I imagine we'll be using quite a lot of memory waving through our holiday review with friends.


> Because with all that space available I've changed how I use disk space - why throw stuff away.

Funny, I'm the opposite. With all this bandwidth available, why keep stuff?


Because storage availability is under your control but bandwidth+remote storage availability is not. In other words, you could lose your remote stuff + access to it.


I could lose my local stuff too, by having a disk fail. I trust S3's reliability more than my local disk, so it doesn't make sense to keep things locally.


It's not just S3: it's that plus your local ISP and the backbone it connects to. All the planets have to be aligned for your remote storage to be available.

Locally, if you're using RAID like everyone else, you just have to trust your local electricity supply and your ability not to rm -rf /.

Remote is fine for backups, though.


Aka http://en.wikipedia.org/wiki/Induced_demand

The 'enables new opportunities that were previously infeasible' component is more exciting, but generally I think most of it gets swallowed up by more of the old.


By cutting out the part about speed you've grossly distorted the quote. Yes, people will hang on to lots of data, but for the vast majority of files there is absolutely no need to access them faster than, oh, let's say 1 Gbps with 1 ms latency.

I'm sure people will find something to do with it but I'm having a very hard time thinking of situations where you want massive amounts of RAM and also need to be very power conscious or wary of power loss. Certain types of database, perhaps? I wouldn't imagine HPC worries much about power loss. Video game textures don't need to be nonvolatile.

For most purposes you can keep shoving in DRAM and SSDs until you have enough space.


Seriously this. Don't worry, Windows 11 will certainly require 1TB to run MaterialAeroMetro 2022.

The only thing worth debating is whether the GUI design pendulum will have swung between flat and glossy an even or odd number of times by then :p


Windows 11? There isn't going to be a Windows 11, the version after 10 will be Windows 20, or maybe just Windows Cloud.


They'll release Windows One the same time the new xBox is announced as "xBox Infinity".


Windows' minimum system requirements haven't changed from Windows Vista in 2006 through the Windows 10 technical preview.


Minimum RAM requirements, to be more precise. In fact, I think they've made the latest versions of Windows use less RAM.


> a virus will write itself to some area that is never overwritten

A simple way to counter this is to have the memory controller tag pages as free/used. If the controller sees a page tagged free, any reads from it can return zeroes without hitting the actual memory bus. A piece of code that wants to hide wouldn't belong to any live object, so its page being marked used would amount to an allocation error and be easily detectable.
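A toy model of that scheme in C (all names are invented for illustration; a real controller would keep the tags in hardware, not in a process-visible array):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

enum { PAGES = 4, PAGE_SIZE = 8 };

static unsigned char cells[PAGES][PAGE_SIZE]; /* raw non-volatile cells   */
static bool used[PAGES];                      /* controller free/used tags */

/* Reads of a free-tagged page return zeroes without touching the cells,
 * so stale data hiding in an unallocated page is simply invisible. */
void ctrl_read(int page, unsigned char *out) {
    if (!used[page])
        memset(out, 0, PAGE_SIZE);
    else
        memcpy(out, cells[page], PAGE_SIZE);
}

void ctrl_write(int page, const unsigned char *in) {
    used[page] = true;                 /* writing marks the page used */
    memcpy(cells[page], in, PAGE_SIZE);
}

void ctrl_free(int page) {
    used[page] = false;  /* payload stays in the cells, but is unreachable */
}
```

Under this model a used page that no live allocation claims is exactly the anomaly a scanner would flag.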


Also let's not forget HP said it's possible that memristors will also perform computation, which melds RAM, HDD and CPU into one and makes for a potentially wilder environment for viruses.


That's only if we keep to the implicitly-shared-by-default UMA/NUMA architecture we've been working with so far. I'd imagine a combined compute-memory-storage unit would be mostly hardware-isolated, with something resembling message-passing semantics for cross-core communication. Something more like FPGA cells, or Erlang processes. (With the likely property that you could have pools of these cells that act like plain memory, for other cells to share and manipulate using handles/segment descriptors/whatever.)

True, even under this model you could probably have a virus running "resident" in a core without the OS even being aware of it. Basically, it'd be like the MMU was a hypervisor and the OS was just a domU within it, unaware of the viral "domains." But those viruses probably wouldn't be able to do much except waste electricity, because they'd need either to get the IOMMU to allocate them some hardware to do IO, or to get the OS (which has such hardware) to agree to do IO on their behalf. (Though now we get into things like intermittent side-channel attacks, where you mine bitcoins on one cell for minutes and then only need the OS to leak a single packet on a (rare) success, which is much simpler than getting a whole stable TCP channel or something.)



