The European Union’s executive body has asked tech platforms including Google, Facebook, YouTube and TikTok to detect photos, videos and text generated by artificial intelligence (AI) and clearly label them for users. It’s part of the European Commission’s bid to crack down on disinformation, which EU officials warn has been thriving since the start of Russia’s war in Ukraine.
Now, Brussels fears generative AI technologies are creating even more fertile ground for the spread of fake news and phony information. “Advanced chatbots like ChatGPT are capable of creating complex, seemingly well-substantiated content and visuals in a matter of seconds,” European Commission Vice President Vera Jourova told reporters on June 5.
“Image generators can create authentic-looking pictures of events that never occurred,” Jourova said. “Voice generation software can imitate the voice of a person based on a sample of a few seconds.”
Warning of widespread Russian disinformation in Central and Eastern Europe, Jourova said machines did not have “any right” to freedom of speech. She has tasked the 44 signatories of the European Union’s code of practice against disinformation with helping users better identify AI-generated content.
“The labelling should be done now – immediately,” she said.
There is no obligation for tech firms to comply with Brussels’ latest request and no sanctions if they do not — because the code of practice is purely voluntary. Andrea Renda, a senior research fellow on digital economy with the Centre for European Policy Studies, thinks there may also be technical barriers.
“Nothing guarantees that they will be able to detect in real time that something is generated by AI,” he told DW. Renda thinks most firms will work on a “best-effort basis,” but said the result would likely be “far from 100%.”
But Jourova said she was reassured by Google CEO Sundar Pichai. “I asked him: Did you develop fast enough technology to detect the AI production and label it so that the people can see that they do not read the text produced by real people? And his answer was: Yes, but we are developing, we are improving the technologies further,” she said.
There is one glaring gap on the guest list of Brussels’ anti-disinformation club: Twitter. In May, the platform owned by billionaire Elon Musk withdrew from the EU code of practice. Jourova was not impressed: By opting out, she said, Twitter “chose confrontation.”
“Twitter has attracted a lot of attention, and its actions and compliance with EU law will be scrutinized vigorously and urgently,” Jourova told reporters on June 5. In August, major content moderation obligations on large online platforms including Twitter will kick in under new EU legislation.
The regulations, dubbed the Digital Services Act, will force companies to be more transparent about their algorithms, beef up processes to block the spread of harmful posts and ban targeted advertising based on sensitive data such as religion or sexual orientation.
Renda said the new rule book was “groundbreaking.” Companies would face fines of up to 6% of global annual turnover if found to be in violation of the new legislation, and could even be banned from operating in the European Union. That means that, though Twitter can dodge Brussels’ latest request to immediately flag AI-generated images or videos, the platform will be forced to fall in line with broader EU rules later this year.
Brussels is also brewing up a separate law to regulate artificial intelligence, known as the AI Act. Under the plans, some uses of AI would be banned outright, such as “social scoring” and most facial recognition in public spaces. The proposals also foresee restricting AI in areas deemed “high risk,” including recruitment — where AI could lead to discrimination — or in public transport.
But those rules are still making their way through the lengthy EU legislative process and likely won’t kick in for at least two years. Not to mention that they were drafted before the latest boom in generative AI. Now, Brussels is racing to catch up. It’s chasing several stopgap measures, including a new voluntary generative AI code of conduct and an “AI pact,” under which companies could opt in to respect future rules before they fully apply.
Catelijne Muller has been advising the European Union on AI legislation since 2018 and co-founded the research and policy group “ALLAI.” She told DW that, despite what looks like a scramble to keep pace, Brussels is well-placed to regulate.
“The latest developments that we hear so much about — ChatGPT and all the generative AI models and foundation models — these fall right in the middle of the legislative process for the AI Act. And they fall at a moment that legislators can still take them into account,” she said. Muller thinks the challenges thrown up by the latest AI tools are “not inherently new.”
“You still see the same problems with bias and discrimination, you still see the same problems with human oversight and agency, and with fundamental rights impacts,” she said. Muller thinks lawmakers need not “throw overboard what they had and write something completely new” — instead, they can work to incorporate developments into existing proposals.
This article first appeared on DW.