As @STRML said, the difference is that Amazon is using network attached (EBS) storage as the primary instance storage, instead of local SSD. This provides a ton of benefit to Amazon, and some benefit for the user as well: CoW backend allows for nearly instant snapshots and clones, multi server/rack redundancy with EC, ability to scale up with provisioned IOPS easily, etc.
The downside is that the access methods for blocks mean some operations are more computationally and bandwidth intensive, meaning you will get fewer IOPS and less sustained throughput without paying more money. In addition, there is always going to be a bit more latency when going over the network versus a SAS RAID card.
As with all things in life, it's a tradeoff. If you look at other large providers' (GCE and DO, at least) network storage offerings, you'll also see a significant performance regression relative to local SSD.
> CoW backend allows for nearly instant snapshots and clones
LOL => An 80GB EBS SSD snapshot takes more than 1 hour.
Subsequent snapshots are incremental, so they're faster — but that first one hurts.
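A rough back-of-envelope makes both halves of this plausible — the first snapshot copies every allocated block to S3, later ones copy only changed blocks. The throughput figure below is purely an assumption (observed snapshot speeds vary a lot); the point is the ratio, not the exact number:

```python
# Back-of-envelope snapshot timing. ASSUMED_MIBPS is an assumption,
# not a documented AWS figure; real throughput varies widely.
MIB_PER_GIB = 1024
ASSUMED_MIBPS = 20  # assumed effective volume-to-S3 copy rate, MiB/s

def snapshot_seconds(changed_gib: float, mibps: float = ASSUMED_MIBPS) -> float:
    """Time to copy `changed_gib` GiB of blocks to S3 at `mibps` MiB/s."""
    return changed_gib * MIB_PER_GIB / mibps

first = snapshot_seconds(80)       # first snapshot: all 80 GiB
incremental = snapshot_seconds(2)  # later snapshot: only ~2 GiB changed

print(f"first: {first/60:.0f} min, incremental: {incremental/60:.1f} min")
# → first: 68 min, incremental: 1.7 min
```

At that assumed rate the full snapshot lands just over an hour, consistent with the parent's complaint, while an incremental touching a couple of GiB finishes in minutes.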
> multi server/rack redundancy with EC
You can move a drive manually after you stop an instance, if that's what you call redundancy.
> ability to scale up with provisioned IOPS
Nope. You need to unmount, clone the existing EBS volume to a new EBS volume with different specs, and mount the new volume. You can't change anything on the fly.
The last time we had to resize a 1TB database drive, we timed the procedure on a smaller volume first... then we announced a 14-hour maintenance window of downtime (if we did it this way) :D
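For the curious, the offline dance described above can be sketched against a boto3-style EC2 client (e.g. `boto3.client("ec2")`). The helper name and structure are mine, and waiting for the snapshot/volume to become available plus all error handling is omitted — but the call names match the EC2 API:

```python
# Sketch of the offline EBS "resize": detach, snapshot, restore to a
# bigger/faster volume, reattach. Hypothetical helper; no waiters or
# error handling, so don't run this as-is against real volumes.
def migrate_to_bigger_volume(ec2, instance_id, volume_id, device,
                             az, new_size_gb, new_iops=None):
    """Snapshot the old volume, restore it with new specs, swap them."""
    # 1. Detach the old volume (unmount the filesystem first).
    ec2.detach_volume(VolumeId=volume_id, InstanceId=instance_id)

    # 2. Snapshot it -- this is the slow part.
    snap = ec2.create_snapshot(VolumeId=volume_id,
                               Description="pre-resize copy")

    # 3. Restore the snapshot into a new volume with the new specs.
    kwargs = {"SnapshotId": snap["SnapshotId"],
              "AvailabilityZone": az,
              "Size": new_size_gb}
    if new_iops is not None:
        kwargs.update(VolumeType="io1", Iops=new_iops)
    new_vol = ec2.create_volume(**kwargs)

    # 4. Attach the new volume where the old one was, then remount.
    ec2.attach_volume(VolumeId=new_vol["VolumeId"],
                      InstanceId=instance_id, Device=device)
    return new_vol["VolumeId"]
```

The whole window is dominated by step 2 — the snapshot and restore of a large volume — which is where multi-hour downtime estimates come from.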
It doesn't appear that Lightsail actually allows hooking up data volumes. This is a surprising regression considering AWS basically invented it, and a major downside compared to DO.
So network-attached storage for Lightsail is upside for AWS, but all downside for the customer.
And they're still the same price as the others (with traffic more expensive). How come? Do they just have that much higher a margin, or are there zero economies of scale?