New Skills for the Next Generation of Journalists

2017-1-HU01-KA203-036038

AI is changing common practices, disrupting copyright and truth in unprecedented ways

In less than four weeks, over 50 000 people have already signed an open letter asking for a six-month ‘pause [in the] giant AI experiments’. The signatories include university professors, CEOs and co-founders of major IT companies, ethicists, scientists, musicians, artists and, taken as a whole, members of the general public – all concerned about the risks of the disruptive developments of artificial intelligence (AI). The widely circulated document raises four fundamental questions: the first focuses on the massive misinformation and disinformation threats posed by AI, the second questions the automation of essentially human, skilled tasks, such as writing texts, producing images or making art, the third calls attention to the huge risk of replacing human minds with AI systems, while the fourth highlights the danger of humankind dramatically losing control.

‘[W]e must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.’

The six-month delay in AI development, the letter explains, should allow a minimum amount of time to create and implement a series of safety protocols for ‘systems [that are] more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal’.

It is highly improbable that common safety protocols for systems based on artificial intelligence can be created in half a year. Let us consider, for example, the evolution of social platforms. In more than 25 years, we have witnessed huge scandals involving interference with democratic processes, such as Cambridge Analytica, public safety hazards, such as the anti-vax movement, and even terrible tragedies, such as live transmissions of massacres. Still, despite consistent regulatory initiatives and court actions, and despite industry self-regulatory movements and policies, there are still social platforms supporting ‘free speech’ that are used by extremist terrorists (see the third part of the ‘Debunking disinformation’ course, entitled ‘Tracking viral disinformation online’).

Nevertheless, in the case of AI, the future of regulation and self-regulation may already be here. The Content Authenticity Initiative, a coalition founded around Adobe in 2019 that now has more than 1 000 members, announced on 3 April enhanced technical support for users and for content creators. This support, based on open-source tools, includes improved metadata on ‘AI-generated ingredients’ and ‘do not train’ commands for content creators who want to keep AI away from their audio-visual products. Transparent metadata has been an industry standard for some time now, so this is merely an update. The ‘do not train’ command is, on the other hand, a technical innovation protecting copyright. The law grants copyright owners exclusive rights to control copies, derivative works, distribution, performance and display for a limited number of years. Copyright owners also have the right to allow public use of their work. AI learns from online content, and it uses online content to create new content, disregarding copyright. A ‘do not train’ command is a technical barrier the industry has to acknowledge and accept in order for copyright law to actually be enforced – and whether it will remains to be seen.
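To make the idea concrete, here is a minimal, purely illustrative sketch of what a machine-readable ‘do not train’ signal attached to a piece of content could look like. The field names used below (‘creator’, ‘ai_generated_ingredients’, ‘do_not_train’) are assumptions made for illustration only, not the actual Content Authenticity Initiative schema; real content credentials are embedded in the file itself and cryptographically signed.

```python
import json

# A minimal, hypothetical example of machine-readable content metadata.
# Field names are illustrative assumptions, not the real Content Credentials schema.
content_metadata = {
    "title": "city_hall_photo.jpg",
    "creator": "Example Newsroom",
    "ai_generated_ingredients": [],   # nothing in this image was produced by AI
    "do_not_train": True              # asks AI systems not to train on this work
}

# In practice such a record would be embedded in the file and cryptographically
# signed; here we simply serialise it to show the idea.
print(json.dumps(content_metadata, indent=2))
```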

Regarding regulation, legislative initiatives are starting to appear as a response to viral deepfakes showing Donald Trump being arrested (which did not happen) or the Pope in a white winter jacket (which he never wore). In addition, in both the UK and the USA, image copyright holders are suing AI-based companies over copyright claims, creating legal precedents that clarify the law under these new, AI-created circumstances.

The rules, standards and safety protocols are already changing as you read this text.

Two rules of debunking, overturned by AI hallucinations

In debunking, understood as ‘a type of fact-checking that targets incorrect and misleading claims and widely held opinions, relevant to a community’, past trial-and-error processes have led to several stable procedures, as we explain in the NEWSREEL2 ‘Debunking disinformation’ course, here.

In general, the target of a misleading claim is not a credible source for a debunking exercise. A journalist explaining why claims about a public figure, for example, are misleading and probably malicious is a more credible source than the public figure themselves or their own communication staff. As Marie Richter, managing editor of NewsGuard Germany, explained to us, it is almost useless for Bill Gates to defend himself by saying he had no ill intentions related to vaccination and COVID-19. The situation is different if journalists cover the story by the book.

Another well-established procedure concerns which stories should be chosen for verification and debunking, and what the actual parts of a debunking piece should be. Journalists must carefully consider whether a piece of misinformation has the potential to go dangerously viral and should therefore be debunked, or not. Without the help of an otherwise well-meaning debunker, some pieces of misinformation, disinformation and mal-information will remain forever what they were in the first place – strange artefacts in some dark corner of the internet.

Still, artificial intelligence hallucinations, which produce untrue results and invent sources, are justifiably changing debunking procedures. Brian Hood, a recently elected Australian mayor, discovered that ChatGPT presented him to members of the public as someone who had spent time in prison for bribery. His lawyers said Hood was involved in the bribery scandal, but as a whistle-blower, not as the guilty party, and that he was never imprisoned. The Australian mayor might be the first person to file a defamation case against an AI company, Reuters notes. This is not the only case, and others are fighting back too. An American law professor was wrongly accused by ChatGPT of sexually harassing a student and went public. The British newspaper The Guardian covered artificial intelligence fabricating Guardian articles to support false claims. Nor is the problem limited to English-language texts: Romanian investigative journalists questioned AI about one of their subjects, only to have ChatGPT return invented data and sources – but not a summary of their own investigation.

AI hallucinations are indeed strange artefacts in some dark corner of the internet. Some have the potential to go viral – as was the case with the images of Trump being arrested, a deepfake widely circulated on social media. Some will be created and destroyed in seconds.

But journalists and public figures alike were right to debunk their own cases and to spread the news about AI-generated misinformation. Microsoft and Google are proposing AI as an alternative to search engines. Text- and image-generating AI can be used in a variety of fields, from entertainment to news, from education to medicine. We are looking for reliable results generated by AI, and these debunking exercises, done by interested parties spreading the word about AI hallucinations, might be just the right pressure the industry needs to self-regulate, substantially and urgently.

This article was written by Raluca-Nicoleta Radu and edited by Manuela Preoteasa (University of Bucharest).

Photo: Pixabay