What factors should journalists take into account when covering the use of AI in the media?
In Guatemala, news media have been reporting on Artificial Intelligence (AI) in a celebratory manner. In other countries, some initiatives have invited journalists to use well-known products such as ChatGPT. However, journalists should be cautious about riding the AI hype, as the purpose of journalism is not aligned with what the AI industry offers. In other words, journalists should avoid producing uncritically optimistic coverage of the AI industry and should assess products such as generative AI critically, taking their costs into consideration.
Does AI Serve the Public Interest?
Historically, companies have claimed their products serve the common good, but these claims often conceal underlying private interests. Media platforms offer a clear example. As media researchers José van Dijck, Thomas Poell, and Martijn de Waal explain in The Platform Society, platforms like Meta, Amazon, Google, and Uber emphasise their societal contributions to obscure their profit-driven motives. The AI industry follows a similar pattern. Companies release AI products primarily to increase profits and outpace competitors, including public services. The primary goal of AI companies is selling products, not improving public welfare or enhancing journalism.
Furthermore, AI technologies are not as reliable as their creators suggest. For example, McDonald's halted its AI chatbot programme due to its repeated failure to take customer orders accurately. If these systems struggle with something as basic as fast-food orders, why should we trust them to be reliable in news production? This is not an isolated issue; AI often replicates existing problems, such as perpetuating sexist biases. Despite the industry's claims, this technology lacks human intelligence, making the term "Artificial Intelligence" misleading. The industry cultivates a mythological image of AI as possessing superhuman abilities in order to drive sales, not because the products are genuinely superior.
AI’s Dependence on Natural Resources and Data Extraction
Journalists should take two critical steps to distance themselves from the AI agenda. First, they need to assess the environmental cost of using AI technologies, which often depend on the extraction of natural resources. For instance, GPT-3, one of the models behind ChatGPT, consumes approximately 500 millilitres of water for every 10–15 responses. Given that ChatGPT has around 200 million active users per week, this water usage is significant and should raise concern. As Bloomberg reports, by 2034, global energy consumption from data centres is projected to match India's current consumption. AI products come with substantial material costs, and journalists committed to ethical practices must take this into account. This assessment could inform ethical guidelines for AI use in journalism, especially regarding its environmental impact.
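A rough back-of-envelope estimate gives a sense of the scale involved (the assumption of one response per user per week is purely illustrative, not a measured usage pattern): 500 millilitres per 10–15 responses works out to roughly 33–50 millilitres per response, so 200 million weekly users producing even a single response each would already consume somewhere in the range of 7–10 million litres of water per week.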
AI also relies heavily on data extraction. Journalists should approach AI applications with scepticism, as these technologies often legitimise the unconsented appropriation of data by private companies. Naomi Klein describes the AI industry as "the largest and most consequential theft in human history." Major AI companies profit from appropriating existing human knowledge produced in the digital realm, including journalistic content, without permission. AI now competes with journalists as a source of information, despite being developed using the work of journalists themselves. Companies like OpenAI did not create the data used to train their models; they extracted existing work created by others. For journalists, using AI tools derived from this "theft" is ethically troubling, particularly because these tools were built on journalistic content taken without consent.
Journalists and the Coverage of AI
The second key action for journalists is to avoid uncritically celebrating AI "marvels". For instance, several reports from Prensa Libre, a major newspaper in Guatemala, have portrayed AI as possessing superhuman capabilities, such as detecting human emotions, a claim that scientific research has disputed. This kind of reporting highlights the problem of repeating tech companies' claims without critically examining the actual capabilities and limitations of AI. According to the Center on Technology Policy at the University of North Carolina, 60% of news articles about AI in the UK focused on industry products, with 33% of sources coming directly from the AI industry. This demonstrates how news coverage often allows private companies to shape the narrative surrounding AI rather than subjecting their claims to the scrutiny they deserve.
Researchers Mercedes Bunz and Marco Braghieri analysed 365 articles about AI from media outlets like The Wall Street Journal, The Daily Telegraph, and The Guardian between 1980 and 2019. They found that the media often framed AI as surpassing human expertise, despite evidence that it frequently underperforms, as highlighted earlier with the McDonald's example. These exaggerations mislead the public about the actual capabilities of AI. To counter this, journalists and news organisations should establish guidelines for critically covering the AI industry, equipping reporters with the tools to scrutinise products marketed as ultimate solutions.
Journalism and AI: Different Agendas
The purposes of the information provided by journalism and by AI products are not the same. While AI products are often flawed and primarily designed for profit, journalism seeks to provide the public with information that is accurate and useful. Therefore, journalists should critically evaluate what incorporating AI into newsrooms entails, especially considering that major platforms like Meta, Microsoft, Amazon, and Google develop these technologies. Journalists should tackle this issue by asking questions such as: Is it ethical to use AI in journalism? Is its environmental impact justified? Journalists should scrutinise AI with the same rigour they apply to governments, developing guidelines that highlight AI's failures, its heavy consumption of natural resources, and its reliance on data extraction. Rather than acting as mere platforms for AI companies, journalists should ask fundamental questions like, "What data did these companies use to train their AI?" By doing so, they can help audiences adopt a more critical perspective on these technologies and push for solutions that serve the public interest, not just the profits of a few corporations.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera Journalism Review’s editorial stance.