Hacker News | jiggawatts's comments

Lack of focus from AMD management. See the sibling comment: https://news.ycombinator.com/item?id=47745611

They just don't care enough to compete.



Went to Spanish.


That explains why the original link is broken: the language selector on the whole site must be broken!

> Trust in society is failing

Something I've observed recurring throughout history is that, in some sense, "too much civilisation" can be a bad thing in the long term.

I once heard someone in the army talk about how some officers wouldn't survive the first week of a real war. Not because of enemy fire, but because, given the opportunity, the men under their command would almost certainly take advantage of the "less civilised nature" of the battlefield to take out someone they despise enough to murder, but not quite enough to risk it in a civilian setting where the tolerance for unsanctioned lethal force is essentially zero.

Something similar happens outside of militaries too, where truly horrible human beings[1] can cynically utilise the enforced peace of civilized countries to do incredibly evil but legal things. The Sacklers come to mind as a prime example. They knowingly and deliberately sold highly addictive drugs marketed with brazen lies and killed about a hundred thousand Americans by some estimates. They are above the law and totally immune to all consequence, personal or otherwise. No violence will ever be done to them! Anyone that tries will be severely punished, because that upsets the "order" of civilised society where the rich and powerful can massacre millions, but the plebs can't ever lift a finger against even one of their cartoonishly evil oppressors without severe personal consequence.

"Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." -- Francis M. Wilhoit [2]

Sociopaths loooove civilised societies! They can mercilessly exploit people while basking in the protection of the law. As long as what they're doing is technically legal, they can get away with almost any amount of evil acts. This does take a while to build up! Norms, expectations, and the like keep the worst of the worst initially at bay, but these things slowly erode as more and more sociopaths take greater and greater advantage. (Cough-Trump-Cough)

Taken far enough, where the common people are stepped on hard enough by those they can never bring to justice, this can result in entire societies just... snapping in their rage. They just need the opportunity, a "push", or some enabling event. In the case of the "friendly fire incidents" taking out bad officers, it's a war. In most societies it is starvation or total economic hopelessness. We all know what this leads to: the French Revolution is the prime example, but many others exist throughout history.

The failure of the United States is that its reins of power have been completely and utterly captured by an increasingly corrupt elite, and there is nothing the common people can do about it. Frustration is growing, slowly but surely.

It's not quite at the boiling over point, not yet, and may take a century to get there, but given the direction things have been heading, it's just a matter of time until the people take their anger out in some direct manner.

Trump might have started the first pebble rolling by causing an oil shock. And gas shock. And fertilizer shock. I'm sure a lot of hungry, cold people who can't even get a job because the AIs have replaced them -- and used their cooking gas for energy -- will be perfectly fine with this and won't ever do anything about it! That would be uncivilized!

[1] Disclaimer: Sam Altman is no saint, but I don't think he's anywhere near the level that he'd deserve mob violence.

[2] At some level the people commenting here that it's shocking and horrifying that anything violent ever happens to a billionaire CEO are betraying their right-wing leanings. Conversely, the people arguing that the elite shouldn't be above personal repercussions for their actions are strongly left leaning.


That’s an unfair characterisation!

Azure engineers absolutely considered security.

They just chose other priorities: growth at any cost to catch up with AWS.


I also like how they waffled on about how winching them up to a helicopter was the fastest option, when they obviously could have shaved an hour off the recovery time by simply having them step out onto the waiting boats!

Having worked for various government agencies for a while I've learned to recognise the signs of the "We're following the procedure whether it makes sense or not, dammit!" attitude you get with large bureaucracies.


I wondered about that. Winching someone who can barely walk and is wearing a spacesuit into a helicopter over choppy water is safer and quicker than parking them on a motor boat and sailing back to the mothership?

What was the real reason? Tradition? Lack of imagination? Photo opportunities?

The rest was great tho.


To play devil's advocate against my own argument: The nearest ship was about 5 km away, which is a decently long boat ride. In choppy waters with a small boat that could be less than ideal for someone who may be injured, weak from an extended stay in microgravity, etc. I assume the plan -- written months or years before the landing -- also had to factor in the possibility that the ships wouldn't have been so close. They did mention several times that the landing was unusually accurate, so it is entirely possible that their pre-planned helicopter ride would have made a lot more sense if they were, say, 20+ km away instead. You don't want dozens of people improvising the procedure in the middle of choppy waters with bad comms, so the best thing to do is to just follow the plan, even if it looks a bit absurd on camera.

100%. It's easy to criticize this, but you have to remember these are the people who planned and executed a successful moon mission. Pretty sure they know what they are doing and have thought about things in more than just a passing way.

So someone who can barely walk is supposed to safely jump from a space capsule to a boat in the middle of the ocean?

Is 10 days enough to make walking difficult?

People get wobbly legs after spending a few days on a cruise ship at sea.

I would assume spending 10 days in zero G is orders of magnitude more chaotic for your motor skills.


“Stepping” from one vessel to another in the middle of the ocean is not like getting on your buddy’s sailboat at the marina even if you have your sea legs. Astronauts don’t even have their earth legs when they splash down; when they return from ISS they can’t even walk right away, though Artemis was a shorter duration mission than that.

"No X, no Y no Z. Just a ..."

15 commits on Day #1, starting from a stub/empty repo. 47K lines of code developed in under two weeks by one person.

Sigh... AI slop.


But it’s written in Rust!!! It’s great!

> sort LEGO bricks by colour and size

I just looked into this out of idle curiosity, after watching some guy build a LEGO sorting machine. (They work in a warehouse that sells used bricks for model builders.)

Interestingly, this is on the cusp of viability, but training the ML model would still be cost-prohibitive (for me). With $17M it's within reach, but there are still the obvious mechanical hurdles: kids don't disassemble their Lego, the conditions are "less than ideal", and even vibrating belts in a warehouse scenario have a lot of trouble keeping bricks separated enough for the camera to get a clear image.

Robot hands are nowhere near the point where they can reliably (or even unreliably!) take apart two arbitrary Lego bricks that are joined, let alone anything of even mild complexity. This is hard for most humans, and often requires the use of tools! See: https://www.lego.com/en-us/service/help-topics/article/lego-...

The machine vision part is... getting there! You could pull some clever tricks with modern hardware such as bright LED lights, multi-spectral or even hyper-spectral sensors, etc. The algorithms have improved a lot too. Early attempts could only recognise a few dozen distinct shapes, and the most recent models manage a few hundred, but those are about 2-3 years old now, which in ML terms is the stone age.

A trick several Lego-recognition model training runs used was to photorealistically render 3D models of bricks in random orientations and every possible colour, which is far faster than manually labelling photos of real bricks.

These days you could use the NVIDIA Omniverse libraries to heavily accelerate and automate this.
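
The data-generation side is conceptually just a short loop. A rough Python sketch of the idea -- the part IDs and colours are an illustrative subset, and render_brick() is a stand-in for whatever renderer you'd actually use (Omniverse, Blender, etc.), not a real API:

    import csv
    import random
    from PIL import Image

    # Illustrative subset; a real run would cover thousands of part IDs
    # (e.g. from the LDraw library) and the full official colour palette.
    PART_IDS = ["3001", "3020", "2456"]
    COLOURS = [(196, 40, 27), (13, 105, 171), (245, 205, 47)]

    def render_brick(part_id, colour, yaw, pitch, roll, out_path):
        # Placeholder: the real version would load the part's 3D mesh and
        # render it photorealistically in the given pose and colour.
        Image.new("RGB", (224, 224), colour).save(out_path)

    with open("labels.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file", "part_id", "colour"])
        for i in range(100_000):
            part = random.choice(PART_IDS)
            colour = random.choice(COLOURS)
            yaw, pitch, roll = (random.uniform(0, 360) for _ in range(3))
            path = f"img_{i:06d}.png"
            render_brick(part, colour, yaw, pitch, roll, path)
            writer.writerow([path, part, colour])

Because the labels come for free from the render parameters, the only manual work left is validating the model against a small set of photos of real bricks.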


They also have a 46 megapixel Nikon Z9 which they don't appear to have used for some odd reason...

I found one taken with the Z9: https://images.nasa.gov/details/art002e009301

There are some very bright noise pixels in the dark area, which is different from the noise in similar photos taken with the D5 (much darker and more uniform).


A common error made with "pixel peeping" is to zoom to 1:1, which shows a smaller physical sensor area with higher megapixel cameras.

The trick is to zoom both images to the same percentage of the frame and compare side-by-side.

I did spot a few "hot" pixels visible on the Moon, but those are easily fixed in post.
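
If anyone wants to replicate the comparison, a quick sketch of the "same magnification" idea in Python with Pillow -- the file names are made up and the megapixel counts are approximate:

    from PIL import Image

    def scale_to_width(img, width):
        # Resize so both frames show the scene at the same magnification,
        # regardless of the sensor's pixel count.
        return img.resize((width, round(img.height * width / img.width)),
                          Image.LANCZOS)

    d5 = scale_to_width(Image.open("d5_shot.jpg"), 2000)  # ~21 MP source
    z9 = scale_to_width(Image.open("z9_shot.jpg"), 2000)  # ~46 MP source

    combo = Image.new("RGB", (4000, max(d5.height, z9.height)))
    combo.paste(d5, (0, 0))
    combo.paste(z9, (2000, 0))
    combo.save("side_by_side.jpg")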


“Retired general criticises the Pentagon” is practically a trope.

As a meta activity, I like to run different codebases through the same bug-hunt prompt and compare the number found as a barometer of quality.

I was very impressed when the top three AIs all failed to find anything other than minor stylistic nitpicks in a huge blob of what to me looked like “spaghetti code” in LLVM.

Meanwhile at $dayjob the AI reviews all start with “This looks like someone’s failed attempt at…”
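
The harness itself is trivial: one fixed prompt looped over repos. A rough sketch using the OpenAI Python client -- the repo paths, file glob, and model name are just placeholders, and a real run would need to chunk the sources to fit the context window:

    import pathlib
    from openai import OpenAI

    PROMPT = "Review this code and list every bug you can find, most severe first."
    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    for repo in ["llvm-snippet/", "dayjob-service/"]:
        # Naive concatenation of the sources; filter or chunk as needed.
        code = "\n\n".join(p.read_text(errors="ignore")
                           for p in pathlib.Path(repo).rglob("*.py"))
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": f"{PROMPT}\n\n{code}"}],
        )
        print(f"=== {repo} ===\n{resp.choices[0].message.content}\n")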

