
They didn't try to do that, though. They tried to organize a group of people who largely (although not exclusively) knew nothing about AI, or ethics, to rubber-stamp their actions and assuage the concerns of regulators & watchdogs. Ethics laundering, basically.


That's a really huge assumption, considering they never met (afaict), let alone deliberated anything or issued recommendations. Google sought input from outside their bubble and those inside the bubble immediately protested based on identity politics.

There are a lot more people outside the bubble than inside, and this action makes it seem like Google doesn't give a shit what they think. It's the sort of thing that makes heavy regulation appealing even for those who normally would oppose regulation on principle.

How does that benefit Google?


There was widespread protest from outside the bubble, too. You'll be unsurprised to learn the entire AI ethics space is being enthusiastically colonized by Thought Leaders all too happy to hold "discussions" in service of whatever their corporate funders desire. Oh sure there'll be some token pearl-clutching, but you can't stop progress (toward ubiquitous facial recognition) right?

Anyway, if you're interested in seeing all the ridiculous shenanigans these people try to pull I recommend following @farbandish on twitter.


I thought it was only 1 or 2 people out of 8 who were problematic; how is that "colonizing"? It also still feels disingenuous to judge all 8 people's intent and positions when they never met a single time or made any deliberation.


> I thought it was only 1 or 2 people out of 8 who were problematic

I think ahelwer meant the AI ethics space in general, not just this panel.

As a researcher who has been working in ML and also software engineering ethics since long before "AI ethics" became a thing, I've definitely noticed a massive infusion of self-promoting "thought leaders" who have zero expertise in either of those things.

Most of those people are MBAs with little or no actual business experience and zero software knowledge (let alone ML or AI research experience). Their ethics background usually amounts to a few undergraduate philosophy courses that, ironically, failed to teach them the one thing a philosophy course should teach: a modicum of intellectual humility.

And yes, those folks tend to care mostly about their paychecks.


> There was widespread protest from outside the bubble, too

It seems implausible that anyone not in a bubble cares what happens to Google's 8-person no-actual-power ethics team. It could be composed of witches and warlocks who spend their days researching pentagrams and nobody should care. There are a lot of small, questionable teams in large companies.

It is only remotely newsworthy as evidence that Google culturally purges conservative voices. Even with that frame, it could just be a beat-up. I'd define the bubble as the people who were getting worked up over it; they are obviously culturally synchronized, because there isn't an obvious rational link.


Actually I think you'll find quite a few people care about what the megacorps decide to do with AI and their approach to the ethics surrounding it, since it's currently a fantasy to hope for regulation by the US gov.


That is what someone should be protesting, isn't it?

Perhaps the easiest way to help the world avoid the potential problems created by Google's work is not to work for Google?

These protesting employees, whatever the cause du jour, are all in a compromised position. They need or want their jobs at Google, so they cannot effectively function as internal watchdogs and regulators. Nor are they representative of the wider affected population; we do not elect them.

Neither Google nor Facebook can be relied on to "regulate itself", whether it comes from top-down decision-making or in response to internal protests.


This should be the canary in the coal mine for whether regular humans can hold opinions that differ from the master class of billionaires. Even if you are a titan in the field of AI ethics, Google has no use for you if you disagree with the masters. Google will lose two remarkable female scientists, who have also exhibited far more compassion and empathy than we can expect from Google's AI in the future.

