People who believe in superintelligence present an interesting case, because many of them are freakishly smart. They can argue you into the ground. But are their arguments right, or is there just something about very smart minds that leaves them vulnerable to religious conversion about AI risk, and makes them particularly persuasive?
Think about this for a bit.
Think about your cat argument and then ask why Michael Bloomberg isn’t president.
So when rational arguments fail, it is a good idea to turn to an ad hominem attack instead?
Regarding Bloomberg, I do not see why we should compare a human being, whose goals are not completely public, with a future AGI that has non-human morality and methods and is not subject to many of the tendencies and limitations humans are subject to.
Let me ask you (or anyone else, especially those who downvote) a question:
What would be a minimum demonstrated capability of an AI that starts to worry you?
The title of the essay is literally "Superintelligence: The Idea That Eats Smart People".
I don’t think it’s ad hominem to point that out.
What would be a minimum demonstrated capability of an AI that starts to worry you?
I think the “AI” part of this is a distraction. I worry about systems that use weapons without human intervention for example. That concern applies no matter what your definition of AI is.
Well, the title itself is an ad hominem, although most of the arguments therein are not, which implies the author thinks he needs to rely on other grounds to convince people.