Before this update, S3 provided read-after-write consistency for object creation at a unique path[0]: a PUT of a brand-new key followed by a GET of that key was consistent. Replacing an existing object was only eventually consistent. So I think you could have worked out an atomic pattern with S3 by writing each state change to a fresh key instead of overwriting an existing one.
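A minimal sketch of that pattern (the key scheme here is my own invention, not an S3 API): give every state change a new, zero-padded versioned key so each write hits the consistent new-object path, and have readers list the prefix and take the lexicographically greatest key.

```python
def versioned_key(prefix: str, version: int) -> str:
    # Zero-pad the version so lexicographic order matches numeric order.
    # Each state change gets a never-before-used key, so the old S3
    # read-after-write guarantee for new objects applies to every write.
    return f"{prefix}/v{version:010d}.json"

def latest_key(keys):
    # A reader lists the prefix and picks the highest version.
    # (List operations were eventually consistent too, so a reader
    # might briefly see a stale latest version, but never a torn one.)
    return max(keys) if keys else None
```

The trade-off is garbage collection: old versions accumulate and have to be cleaned up separately, e.g. with a lifecycle rule.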
The ETag is not a reliable hash of the file contents. It will differ if the file was uploaded in multiple parts (as the CLI does by default for large files) versus moved in one operation, like copying from one bucket to another. You can tell the difference because a multipart upload's ETag contains a "-" followed by the number of parts.