AI journalism

By Emil Bjerg, journalist and editor 

AI is changing the media industry at a dizzying pace. Should journalists be concerned or excited about the new tools and knowledge opportunities it presents? Let’s take a look at the past, present, and future of AI in journalism. 

Robo-reporters: First wave of AI journalism 

The media industry is highly susceptible to change from new tools and technologies. Within a decade of social media's rise, by the mid-2010s, more people consumed their news via social media than through any other type of media, with algorithmic curation defining their news intake.

AI's role in news and journalism is expanding, not just in curating news but also in creating it. AI – or robo-reporters – has been used in news and journalism for around ten years, especially in financial news and sports reporting. Bloomberg was one of the first outlets to experiment with AI, using its Bloomberg Cyborg program to write financial news and gaining a competitive edge over its rival, Reuters. As of 2019, around one-third of the content produced by Bloomberg was generated by AI. When the news agency AP brought in robo-reporters in 2014, it went from producing 300 quarterly articles on company earnings to 3,700. Today, AP's AI automatically generates 40,000 articles per year.

It’s not coincidental that AI excels at financial reporting. The global market trades 24/7 and involves an astonishing volume of data. The tireless, pattern-finding qualities of AI perfectly align with the constant, data-intensive nature of global finance. 

2023: Generative AI versus writers 

After the release of GPT-3.5, things sped up as generative AI became a household talking point. Several large media outlets – BuzzFeed, CNET, and the German Bild – have already attempted to replace journalists with AI. In Hollywood, scriptwriters are on strike, fearing they will be replaced by AI. The first half of 2023 gave us a peek at what happens in the media industry when writers and generative AI collide rather than collaborate.

A reason for writers, journalists, and content creators to stay hopeful? The attempts to replace human writers with generative AI have, so far, mostly been less than fruitful.

Issues with factuality and originality 

The tech media outlet CNET was an early experimenter with generative AI. The result was articles full of factual errors that somehow passed through editorial checks. The series of factual errors made Hany Farid, a professor of computer science at the University of California, Berkeley, wonder "if the seemingly authoritative AI voice led to the editors lowering their guard and [being] less careful than they may have been with a human journalist's writing."

Factual errors have been one issue. Another is the crafting of original arguments and perspectives on current events. The writer Kyle Chayka hired a company called Writer to train an AI to mimic his style. Trained on more than 150,000 of his words, the AI version of Chayka tended towards tropes and banalities and struggled to provide value in terms of originality. 

What does it all mean for the reader? 

Algorithms and AI are changing how we produce, curate, and consume news and journalism. This evolution poses new challenges for the reader. While established media houses will continue to carefully fact-check what they publish, more opportunistic sites, fuelled by hopes of making a quick buck on AI-generated content, are popping up around the web. NewsGuard, a company that tracks online misinformation, recently identified 49 sites "almost entirely written by artificial intelligence software." A journalist from The Guardian who visited the AI-generated sites writes: "A tour of the sites, featuring fake facts and odd wording, left me wondering what was real."

The same challenges extend to social media. In the past months, malicious use of generative AI has created pictures of a bombed Pentagon and of Trump fighting with police officers. The broad accessibility of generative AI means that everyone can generate pictures like this in a matter of seconds – and share them even faster on social media. 

While radio, television, and newspapers have journalists and editors trained to evaluate the truthfulness, relevance, and importance of news, fact-checking on social media happens post-publication – if it happens at all. With the rise of generative AI, it has become even more important for the average media consumer to stay critical of news stories found on social media. Caution before sharing stories and double-checking tabloid-style content on social media are more important than ever.

New powers of discovery, creation, and connection 

As AI continues to evolve, it is sure to enter new arenas of journalism and, in doing so, compete with journalists. Charlie Beckett, Director of the media think tank Polis at the London School of Economics, isn’t fearful of robot reporters overtaking the work of journalists. Speaking to the Goethe Institute, Beckett expresses his belief that artificial intelligence, machine learning, and data processing will endow journalists with “new powers of discovery, creation, and connection.” 

He's backed by Lisa Gibbs, Director of News Partnerships and AI News Lead at AP, who says: "It frees our journalists from routine tasks, to do higher-level work that uses their creativity; allows us to create more content that serves new audiences more efficiently; and improves our ability to discover news."

So, collaboration rather than competition: Besides using generative AI directly, what are some ways that journalists can collaborate with AI?

Journalists are already using machine learning to navigate large data sets with programs like Quill by Narrative Science and Wordsmith by Automated Insights. Tools such as Trendsmap, Reuters' News Tracer, and Meta's CrowdTangle are designed to help journalists stay on top of what's happening by "alerting journalists to breaking news, viral stories, and anomalous data trends."

Quill and Wordsmith are advanced platforms that specialize in natural language generation. They analyze complex datasets and transform data entries into readable narratives. These tools are used in the aforementioned finance and sports reporting but also in fields like investigative journalism and election coverage. In those contexts, they assist journalists in analyzing large sets of data to uncover hidden patterns, trends, or irregularities.
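The core idea behind such data-to-text tools can be illustrated with a minimal, template-based sketch. The data fields, the `earnings_sentence` function, and the template below are hypothetical illustrations of the technique, not the actual APIs of Quill or Wordsmith:

```python
# Minimal sketch of template-based natural language generation (NLG):
# turning one structured data record into a readable news sentence.

def earnings_sentence(row):
    """Render a quarterly earnings record as a sentence."""
    direction = "up" if row["eps"] >= row["eps_prior"] else "down"
    change = abs(row["eps"] - row["eps_prior"]) / row["eps_prior"] * 100
    return (
        f"{row['company']} reported quarterly earnings of "
        f"${row['eps']:.2f} per share, {direction} {change:.0f}% "
        f"from ${row['eps_prior']:.2f} a year earlier."
    )

record = {"company": "Acme Corp", "eps": 1.32, "eps_prior": 1.10}
print(earnings_sentence(record))
# -> Acme Corp reported quarterly earnings of $1.32 per share,
#    up 20% from $1.10 a year earlier.
```

Commercial systems add far richer templates, variation, and data validation, but the pipeline – structured data in, fluent sentences out – is the same.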

News Tracer uses machine learning to analyze clusters of tweets that focus on similar events and then generates a 'newsworthiness rating'. According to Reuters, News Tracer gives them a head start of 8 to 60 minutes while also helping them verify news. Trendsmap, meanwhile, uses tweets to provide real-time tracking of trending hashtags and topics from anywhere in the world. It visualizes these trends on a map, allowing journalists to identify local or global trends.
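The clustering-and-scoring idea can be sketched in a few lines. Reuters' actual model is proprietary; the keyword clustering and the scoring formula below are assumptions chosen only to illustrate the shape of the technique:

```python
from collections import defaultdict

# Illustrative sketch: group social media posts about the same event,
# then assign each cluster a crude 'newsworthiness' score.

def cluster_by_keyword(posts, keywords):
    """Group posts into clusters by which keyword they mention."""
    clusters = defaultdict(list)
    for post in posts:
        for kw in keywords:
            if kw in post["text"].lower():
                clusters[kw].append(post)
    return clusters

def newsworthiness(cluster):
    # More posts from more distinct authors -> higher score (a crude proxy;
    # a real system would weigh source credibility, velocity, location, etc.).
    authors = {p["author"] for p in cluster}
    return len(cluster) * len(authors)

posts = [
    {"author": "a", "text": "Explosion reported downtown"},
    {"author": "b", "text": "Huge explosion near the station"},
    {"author": "a", "text": "Explosion confirmed by witnesses"},
    {"author": "c", "text": "Lovely weather today"},
]
clusters = cluster_by_keyword(posts, ["explosion", "weather"])
scores = {kw: newsworthiness(c) for kw, c in clusters.items()}
print(scores)  # the explosion cluster scores higher than the weather one
```

A burst of independent reports about the same event scoring highest is exactly the signal that buys an alerting tool its head start.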

Awareness of AI bias 

As AI pattern recognition is expected to play a growing role in curating the stories we hear, it's important to be aware of AI biases. Just as journalists have biases, AI isn't objective, even if its authoritative tone can make it seem so.

The output of AI systems is never more diverse or inclusive than the data they are trained on. When using so-called "event detection systems" to recognize and curate news, it's important to be aware of where the data comes from. Relying solely on event detection systems means relying solely on the communities that report events and phenomena on social media platforms like X, formerly known as Twitter. That can have implications for communities that are less represented online and in the media.

The future of journalism according to a generative AI 

Asking ChatGPT to co-produce articles about AI quickly became a trope after GPT's mainstream release. The language model's mix of data overview and creative forecasting makes it a good and fun sparring partner. We'll let the imaginative abilities of GPT-4 have the last words about the future of AI in journalism:

GPT-4 envisions a future of content creation where stories become interactive and are told with readers as co-authors. It writes: "AI-driven platforms can adapt narratives based on reader input, effectively making readers co-authors of the story. By merging narrative forms with AI, we create an immersive storytelling experience where the line between author and audience blurs."

"Another innovative use case for AI in journalism," GPT-4 writes, "could involve utilizing AI's ability for deep, semantic understanding and context-awareness." GPT-4 asks us to imagine a system that can provide individualized "context layers" for every news article you read:

“For instance, when you’re reading an article about a major new breakthrough in artificial intelligence by a tech company, the AI could provide you, in real-time, with a tailored summary of the history of AI advancements, previous similar breakthroughs by other tech companies, and the specific implications this development could have on the broader tech industry. These context layers would be adapted to your previous knowledge and interests based on your reading history and preferences. So, if you’re a software engineer with a deep understanding of machine learning, the AI might skip the basic explanation and offer more in-depth technical context. In this way, complex stories become more relevant and personalized to each reader.” 

Further, the use of augmented reality (AR) might become a thing in journalism – at least, if we are to trust the predictive power of GPT-4. It imagines that “a feature on Mars exploration could let you walk on the red planet’s surface without leaving your home. By enabling readers to live the story rather than just read it, AI-augmented journalism can offer an unprecedented level of immersion and understanding.”