Do you trust AI to write the news? It already is – and not without issues
The Guardian has accused Microsoft of reputational damage, after it displayed one of its articles alongside an inappropriate AI-generated poll.
Businesses are increasingly using artificial intelligence (AI) to generate media content, including news, to engage their customers. Now, we're even seeing AI used for the "gamification" of news – that is, to create interactivity associated with news content.
For better or worse, AI is changing the nature of news media. And we'll have to wise up if we want to protect the integrity of this institution.
Imagine you're reading a tragic article about the death of a sports coach at a prestigious Sydney school.
In a box to the right is a poll asking you to speculate about the cause of death. The poll is AI-generated. It's designed to keep you engaged with the story, as this will make you more likely to respond to advertisements provided by the poll's operator.
This scenario isn't hypothetical. It played out in The Guardian's coverage of the death of Lilie James.
Under a licensing agreement, Microsoft republished The Guardian's article on its news app and website Microsoft Start. The poll was based on the content of the article and displayed alongside it, but The Guardian had no involvement in or control over it.
If the article had been about an upcoming sports fixture, a poll on the likely outcome would have been harmless. Yet this example shows how problematic it can be when AI starts to mingle with news pages, a product traditionally curated by experts.
The incident led to reasonable anger. In a letter to Microsoft president Brad Smith, Guardian Media Group chief executive Anna Bateson said it was "an inappropriate use of genAI [generative AI]", which caused "significant reputational damage" to The Guardian and the journalist who wrote the story.
Naturally, the poll was removed. But it raises the question: why did Microsoft let it happen in the first place?
The first part of the answer is that supplementary news products such as polls and quizzes genuinely engage readers, as research by the Center for Media Engagement at the University of Texas has found.
Given how cheap it is to use AI for this purpose, it seems likely news businesses (and businesses displaying others' news) will continue to do so.
The second part of the answer is there was no "human in the loop", or limited human involvement, in the Microsoft incident.
The major providers of large language models – the models that underpin various AI programs – have a financial and reputational incentive to make sure their programs don't cause harm. OpenAI with its GPT models, Google with PaLM 2 (used in Bard), and Meta with its Llama 2 have all made significant efforts to ensure their models don't generate harmful content.
They often do this through a process called "reinforcement learning", where humans curate responses to questions that might lead to harm. But this doesn't always prevent the models from producing inappropriate content.
It's likely Microsoft was relying on the low-harm aspects of its AI, rather than considering how to minimise harm that may arise through the actual use of the model. The latter requires common sense – a trait that can't be programmed into large language models.
Generative AI is becoming accessible and affordable. This makes it attractive to commercial news businesses, which have been reeling from declining advertising revenue. As such, we're now seeing AI "write" news stories, saving companies from having to pay journalist salaries.
In June, News Corp executive chair Michael Miller revealed the company had a small team that produced about 3,000 articles a week using AI.
Essentially, the team of four ensures the content makes sense and doesn't include "hallucinations": false information made up by a model when it can't predict a suitable response to an input.
While this news is likely to be accurate, the same tools can be used to generate potentially misleading content parading as news, and nearly indistinguishable from articles written by professional journalists.
Since April, a NewsGuard investigation has identified hundreds of websites, written in several languages, that are mostly or entirely generated by AI to mimic real news sites. Some of these included harmful misinformation, such as the claim that US President Joe Biden had died.
It's thought the sites, which were teeming with ads, were likely generated to get ad revenue.
Generally, many large language models have been limited by their underlying training data. For instance, models trained on data up to 2021 will not provide accurate "news" about the world's events in 2022.
However, this is changing, as models can now be fine-tuned to respond to particular sources. In recent months, the use of an AI framework called "retrieval augmented generation" has evolved to allow models to use very recent data.
With this method, it would certainly be possible to use licensed content from a small number of news wires to create a news website.
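To make that concrete, here is a minimal sketch of how retrieval augmented generation works in principle: recent, licensed text is retrieved and placed into the model's prompt, so the output is grounded in up-to-date sources rather than stale training data. This is an illustration only, not any publisher's actual pipeline; the wire snippets, the retrieve helper and the prompt format are all hypothetical, and the call to a language model is left as a placeholder.

```python
# Toy sketch of retrieval augmented generation (RAG), for illustration only.
# The "wire_stories" below stand in for licensed news-wire content; a real
# system would query a wire service and send the prompt to a language model.
from collections import Counter
import math

wire_stories = [
    "Local council approves new cycling lanes for the city centre.",
    "National team names squad for next month's qualifiers.",
    "Severe storms expected across the east coast this weekend.",
]

def tokenize(text: str) -> Counter:
    """Split text into lowercase word counts (a deliberately crude index)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bags of words."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k wire stories most similar to the query."""
    q = tokenize(query)
    ranked = sorted(wire_stories, key=lambda s: cosine(q, tokenize(s)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Assemble retrieved, recent text into the prompt a model would receive."""
    context = "\n".join(retrieve(query))
    return (
        "Using only the sources below, write a short news summary.\n\n"
        f"Sources:\n{context}\n\nTopic: {query}"
    )

if __name__ == "__main__":
    # In a real pipeline this prompt would go to a large language model;
    # here we simply print it to show what the model would be grounded on.
    print(build_prompt("weekend weather warning for the east coast"))
```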
While this may be convenient from a business standpoint, it's yet one more potential way that AI could push humans out of the loop in the process of news creation and dissemination.
An editorially curated news page is a valuable and well-thought-out product. Leaving AI to do this work could expose us to all kinds of misinformation and bias (especially without human oversight), or result in a lack of important localised coverage.
Australia's News Media Bargaining Code was designed to "level the playing field" between big tech and media businesses. Since the code came into effect, a secondary change is now flowing in from the use of generative AI.
Putting aside click-worthiness, there's currently no comparison between the quality of news a journalist can produce and what AI can produce.
While generative AI could help augment the work of journalists, such as by helping them sort through large amounts of content, we have a lot to lose if we start to view it as a replacement.
Associate professor of regulation and governance
This article is republished from The Conversation under a Creative Commons license. Read the original article.