They don't bother to mention it, but this is actually to comply with the new EU AI Act.
> Providers will also have to ensure that AI-generated content is identifiable. Besides, AI-generated text published with the purpose to inform the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes
Is anyone else worried about how naive this policy is?
The solution here is for important institutions to get on board with public key infrastructure and start signing anything they want to certify as authentic.
The culture needs to shift from assuming video and pictures are real, to assuming they are made the easiest way possible. A signature means the signer wants you to know the content is theirs, nothing else.
It doesn't help to train people to live in a pretend world where fake content always has a warning sticker.
I see a lot of people confusing authenticity with accuracy. Someone can sign the statement "Obama is white", but that doesn't make it a true statement. The use of PKI as part of showing provenance/chain of trust doesn't make any claims about the accuracy of what is signed. All it does is assert that a given identity signed something.
It's not about what is being signed, it's about who signed it and whether you trust that source. I want credible news outlets to start signing their content with a key I can verify as theirs. In that future all unsigned content is by definition fishy. PKI is the only way to implement trust in a digital realm.
> It's not about what is being signed, it's about who signed it
Yeah, that's what I said about PKI. I also said there is confusion between the provenance of a statement and its accuracy. A signature says nothing about accuracy, but those who confuse the two will think that because something is signed, or because it was signed by a certain "trustworthy" party, it must be accurate. PKI does not establish the trustworthiness of the other party; it only gives you confidence in the identity of the party who signed something.
George Santos could sign his resume. We know it was signed by George Santos. And yet nothing in the resume could be considered accurate (or even a falsehood) purely because it is signed. That it was proven to be signed by George Santos via PKI is independent of the fact that George Santos is a known liar.
Why do you need a whole PKI for that, rather than just, say, a link to the news outlet's website where the content is hosted? People have already been doing that pretty much since the web was created.
PKI has been around for, what, 30 years? Image authentication is just not going to happen at this point, because everyone's got too used to post-processing and it's a massive hassle for something that ultimately doesn't matter because real people use other processes to determine whether things are true or not.
Example: a video shows a group of police beating up a man for a minor crime (say littering). The video is signed by Michael Smith (the random passerby who filmed it on his phone). The video is published to Instagram and shared widely.
How do you expect people to assess the authenticity of this video?
This is about as realistic as the next generation of congress people ending up 40 years younger.
We literally have politicians talking about pouring acid on hardware, and we expect these same bumbleheads to keep their signing keys safe? The average person is far too technologically illiterate to do that. Next time you go to grandma's house you'll learn she traded her signing key for chocolate chip cookies.
I imagine it would be something handled pretty automatically for everyone.
If Apple wanted to sign every photo and document on the iPhone, they could probably make the whole user experience simple enough for most grandmas.
Some people will certainly give away their keys, just like bank accounts and social security numbers today, but those people probably aren't terribly concerned with proving the ownership of their online documents.
>I imagine it would be something handled pretty automatically for everyone.
Then your imagination fails you.
If it is automatic/easy, then you have the 'easy key' problem: the key is easy to steal or copy. For example, is it based on your Apple account? Then what happens when the account is stolen? Is it based on a device? Then what happens when the device is stolen?
Who's doing the PKI? Is it going to be like HTTPS, but for individuals? (That has never really worked at this scale, or with revocation.) And most social media content is posted by randos on the internet.
When your account is stolen someone can create "official" documents in your name and impersonate you. There could be a system for invalidating your key after a certain date to help out with those situations.
For prominent people who actually have to worry about being impersonated they could provide their own keys.
The infrastructure could be managed by multiple groups or a singular one like the government. The point isn't to be a perfect system, it's to generate enough trust that what you're looking at is genuine and not a total fraud.
In a world where AI bots are generating fake information about everyone in the world, that kind of system could certainly be built and be useful.
> The culture needs to shift from assuming video and pictures are real, to assuming they are made the easiest way possible.
That sounds like a dystopia, but I guess we're heading in that direction. I expect a lot of fringe beliefs (flat Earth, lizard-people conspiracies, "the war in Ukraine is fake") to become way more mainstream.
Usually when a big corporation gleefully announces a change like this, it's worth checking whether there are any regulations on that topic taking effect in the near future.
On a local level, I recall how various brands started making a big deal of replacing disposable plastic bags with canvas or paper alternatives "for the environment" just coincidentally a few months before disposable plastic bags were banned in the entire country.
Seems like this is sort of a manufactured argument. I mean, should every product everywhere have to cite every regulation it complies with? Your ibuprofen bottle doesn't bother to cite the FDA rules under which it was tested. Your car doesn't list the DOT as the reason it's got ABS brakes.
The EU made a rule. YouTube complied. That changes the user experience. They documented it.
+1. In France at least, food products must not suggest that mandatory properties like "preservative-free" are unique to them. When they advertise this on the package, they must disclose that it's required by regulation. Source: https://www.economie.gouv.fr/particuliers/denrees-alimentair...
Doesn't seem that out of place for a blog post on the exact change they made to comply though.
I mean you'd expect a pharmaceutical company to mention which rules they comply with at some point, even if not on the actual product (though in the case of medicine, probably also on the actual product).
So you making good pay by enabling a scammer makes it totally okay for the scammer to operate? By that logic, hitmen should no longer be prosecuted, provided they make good pay from it.
You'd think they're evil too if they let a bunch of middlemen and parasitic companies dictate how the software you invested untold sums and hours developing and marketing should work.
Sorry, I wasn't entirely clear that I was specifically responding to the GP comment referencing the EU AI act (as opposed to creating a new top-level comment responding to the original blog post and Google's specific policy) which pointed out:
> Besides, AI-generated text published with the purpose to inform the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes
Clearly "AI-generated text" doesn't apply to YouTube videos.
But, it is interesting that if you use an LLM to generate text and present that text to users, you need to inform them it was AI-generated (per the act). But if a real person reads it out, apparently you don't (per the policy)?
This seems like a weird distinction to me. Should the audience be informed if a series of words were LLM-generated or not? If so, why does it matter if they're delivered as text, or if they're read out?
> Providers will also have to ensure that AI-generated content is identifiable. Besides, AI-generated text published with the purpose to inform the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes
https://digital-strategy.ec.europa.eu/en/policies/regulatory....
Some discussion here: https://news.ycombinator.com/item?id=39746669