
I honestly don't think anything in that post 'demolishes' the criticism or even advances some sort of argument.

It's just a huge wall of text full of weird analogies, which is quite typical of these 'rationalist' community posts.

People like Bostrom or Yudkowsky have one thing in common: they are not engineers, and they stand to gain financially (in fact it is what pays their bills) from conjuring up non-scientific pie-in-the-sky scenarios about artificial intelligence.

In Bostrom's case this goes much further; he has given this treatment to everything from nuclear energy on down. Andrew Ng put it quite succinctly. Worrying about this stuff is like worrying about overpopulation on Mars, and there's maybe need for one or two people in the world to work on this.

I really wish we could stop giving so much room to this, because it makes engineers as a community look like a bunch of cultists.



> Worrying about this stuff is like worrying about overpopulation on Mars, and there's maybe need for one or two people in the world to work on this.

When something is shown to be doable in principle, it's often not clear how difficult it will be in practice.

In 1933, Ernest Rutherford famously dismissed the idea of harnessing nuclear energy as "moonshine". Shortly afterwards, Leo Szilard conceived the neutron-induced nuclear chain reaction and filed a patent on it in 1934. At that point, nuclear fission had not yet been discovered, and actually making nuclear energy viable was a pipe dream.

As we all know, the first nuclear weapons were used in 1945. Right up until that day, German physicists believed nuclear weapons would not be used in the war: while possible in principle, actually constructing a working device would require a herculean effort that no nation would expend in time.

The German physicists weren't too far off in estimating how difficult nuclear weapons were. They just failed to predict that the US would throw 130,000 people, including most of its top minds, at the problem for years.

Now, we have no idea how difficult superintelligence will be. But the possibility that we're a couple of breakthroughs and a Manhattan Project away from superintelligence is real, and I want a hell of a lot more than one philosopher and an eccentric fanfic writer working on it.

EDIT: No offense to Yudkowsky. I thought the fanfic was fairly good and, more importantly, achieved its purpose.


So you agree there is room for them to work on this, yet you feel they are making engineers generally look like cultists?

Maybe you’re just being oversensitive. The hype wave on AI danger is completely over, and there’s nothing wrong with people studying the question if that’s their interest.


You know we've been here before, right? I mean, the Lighthill report; Ray Kurzweil, a serial offender for over thirty years; the singularity-is-around-the-corner thing; outrageous claims for fMRI; self-driving cars. Overhyped IBM Watson, which health professionals are now saying has misdiagnosis problems.

Sure. We have Google image matching, better colorisation, some improvements in language processing, and good cancer detection on X-rays. These are huge. But hype is, alas, making engineering increments look like a cult.


Ray Kurzweil was never part of the AI danger hype.


No. That was my random anti-AI bias coming out. Ranter gotta rant.



