Al Jazeera Journalism Review

Deceased Politician: deepfakes of deceased politicians campaigning for their political parties (Source: Vasanth TV, compiled by The New York Times)

How AI Synthesised Media Shapes Voter Perception: India's Case in Point

The recent Indian elections witnessed the unprecedented use of generative AI, leading to a surge in misinformation and deepfakes. Political parties leveraged AI to create digital avatars of deceased leaders, deepfakes of Bollywood actors and hyper-personalised appeals to voters.

 

The world's largest democratic exercise has concluded, with India's incumbent returned for another term. Beyond its sheer scale, this was India's first general election of the generative AI era, and the impact was profound: truth blurred into deceit. Indian voters were inundated with a deluge of misinformation and counterfeit media synthesised using AI. Politicians' doppelgangers, voice impersonations, deceptively edited clips and much more flooded social media. The integrity of democratic discourse was put to the test as voters grappled with distinguishing fact from fiction amid an unprecedented onslaught of AI-enabled disinformation.

 

Manipulated facts and mendacity are long-standing issues in elections, but the rise of generative AI has introduced a new dimension of complexity and efficiency to the spread of misinformation. In India, this technological shift profoundly affected voter perception and electoral integrity. AI-generated misinformation, particularly deepfakes and manipulated media, has emerged as a significant challenge, exacerbating cognitive biases and complicating the democratic process. According to Adobe's "Future of Trust Study" for India, an overwhelming majority of Indians, 86 per cent, were concerned that the spread of misinformation and deceptive deepfake content could undermine the integrity of future electoral processes in the country.

 

Misinformation Outbreak in India

In a bizarre turn of events, deceased politicians were seen soliciting votes for their political parties. During recent electoral campaigns, the Dravida Munnetra Kazhagam (DMK) and the All India Anna Dravida Munnetra Kazhagam (AIADMK) leveraged AI to create digital avatars and voice reproductions of their deceased leaders, M. Karunanidhi and J. Jayalalithaa. These digital representations were used to deliver speeches and endorsements, playing on the emotional connections that voters had with these iconic figures. The use of such avatars raises ethical questions about consent and the manipulation of public sentiment.

 

Bollywood Elections: a snapshot of an AI-generated video of Indian politicians, including Modi, performing a Bollywood dance (Source: @the_indian_deepfaker)

The misuse of AI extended far beyond mere impersonations of politicians. Viral videos surfaced featuring startlingly realistic AI-generated personas of prominent Bollywood actors Aamir Khan and Ranveer Singh. In these deepfakes, the digital doppelgangers unleashed scathing rebukes against Prime Minister Modi's tenure, accusing him of failing to fulfil key campaign promises and inadequately addressing the nation's pressing economic challenges over his two terms in office. Both actors have vehemently disavowed the fabricated videos and filed police complaints over the malicious use of their likenesses through AI synthesis. The dissemination of such deceptive content stoked further confusion and undermined the integrity of the electoral discourse, especially in a country like India, where Bollywood carries emotional weight far beyond mere entertainment.

 

AI also enabled parties to achieve hyper-personalisation. Prime Minister Modi was seen personally addressing individual Indian voters by name. The precision targeting enabled by generative AI has allowed campaigners to create tailored avatar videos or audio calls in any of India's myriad regional languages and dialects, facilitating a more direct connection with diverse voter demographics. However, the lack of clear sourcing for these personalised communications has raised concerns about the potential for large-scale voter manipulation and erosion of trust in the electoral process.

 


Another alarming attempt to manipulate voter perception came to light: a sophisticated AI-driven influence operation orchestrated by the Israeli political campaign management firm STOIC. Dubbed "Zero Zeno" by OpenAI, which first uncovered the network of AI-generated misinformation targeting the Indian elections, this covert operation highlights the growing threat of AI-generated misinformation to democratic processes globally. STOIC's activities involved creating and disseminating content critical of the ruling Bharatiya Janata Party (BJP) while promoting the opposition Congress party. According to OpenAI's report, the campaign generated misleading comments and articles, which were distributed across social media platforms such as Facebook, X (formerly Twitter) and Instagram.

 

The revelation of the Zero Zeno campaign sparked significant political reactions in India. The BJP condemned the interference, highlighting the dangers of foreign influence on the country's democratic processes. The Union Minister of State for Electronics and IT, Rajeev Chandrasekhar, emphasised the need for thorough investigations into such activities to safeguard the integrity of elections. He took to X, stating, "It is absolutely clear and obvious that BJP was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties. This is a very dangerous threat to our democracy. It is clear that vested interests in India and outside are clearly driving this and need to be deeply scrutinised, investigated and exposed."

 

In addition to these grave and direct misuses of AI, Indian users' social media feeds were saturated with deepfakes disguised as merriment. These deepfakes featured Prime Minister Narendra Modi in Bollywood-style performances, from vibrant dance routines to renditions of popular songs. They appear to have been used to humanise the leader and make him seem more relatable and charismatic to the electorate.

 

One of the key impacts of these deepfakes is their potential to influence younger voters. AI expert Muralikrishnan Chinnadurai noted that such content could make Modi seem like a more approachable and softer leader, which can resonate particularly with younger, more digitally savvy voters who might be more impressed by this novel use of technology.

 

Machinery behind Misinformation

Supply kept pace with the demand for AI-generated media. The widespread appetite for AI-enabled content has fuelled a million-dollar industry in India, with well-organised firms employing AI to produce political content at the request of political parties. Demand is so high that these firms are swamped with contracts and often have to bring in external resources to meet deadlines.

 

“We have hired people who have previously worked in [the film] industry in VFX [or] CGI... for high-quality output and fast delivery,” Nayagam, who has delivered AI media to multiple political parties, told Rest of World. He further said, "Each candidate is willing to spend over a million rupees [$12,000] on using AI technology for their election campaigning.”

 

Cognitive Impact on Voter Perception

While this AI-manufactured information might seem trivial at first glance, its impacts are profound. Research shows that one of the primary cognitive biases that AI-generated misinformation exploits is confirmation bias. Voters tend to favour information that confirms their pre-existing beliefs and attitudes. AI systems can generate tailored misinformation that aligns with individual biases, reinforcing false narratives and entrenching political polarisation.

 

The sheer volume of AI-generated misinformation contributes to cognitive overload. Voters are bombarded with information, making it challenging to process and evaluate the credibility of each piece. This overload can lead to heuristic processing, where individuals rely on cognitive shortcuts rather than critical analysis, increasing their susceptibility to misinformation.

 

AI-generated misinformation can undermine trust in legitimate news sources. When voters encounter conflicting information, their trust in reliable media diminishes. This erosion of trust complicates the task of distinguishing between credible and false information, further distorting voter perception.

 

Combating AI Misinformation

The impact of AI-generated fake media on voters is clearly profound. This makes it imperative to establish methods that mitigate such malpractice and help voters quickly separate truth from a heap of deceit. Efforts are already under way. Meta, the parent company of Facebook and Instagram, has recognised the profound influence that social media can have on elections. Given the increasing sophistication of AI-generated content, Meta has implemented several strategies to curb misinformation.

 

In regard to the Indian elections, Meta stated, "We have made further investments in expanding our third-party fact-checking network in India ahead of the General Elections 2024. We have added Press Trust of India’s (PTI) dedicated fact-checking unit within the newswire’s editorial department to our already robust network of independent fact-checkers in the country. The partnership will enable PTI to identify, review and rate content as misinformation across Meta platforms."

Dancing Deepfake: an AI-generated video of Modi and Mamata Banerjee imitating a viral American rapper (Source: Rohan Pal, X, compiled by The New York Times)

 

In an effort to combat the rising tide of AI-enabled disinformation, OpenAI, the company behind ChatGPT, has developed a deepfake detector tool specifically designed for disinformation researchers. The tool is primarily built to detect images generated by OpenAI's own popular image generator, DALL-E. While the company claims that the detector can accurately identify 98.8 per cent of images created by its latest model, DALL-E 3, it acknowledges that the tool's capabilities are limited when it comes to detecting synthetic media generated by other prevalent AI models such as Midjourney and Stability AI.

 

Although present-day deepfake detection systems cannot yet keep pace with the spread of AI misinformation, multifaceted efforts are under way, and several startups and businesses are working in this direction. Zohaib Ahmed, the founder and CEO of the AI firm Resemble AI, says, "By distributing and seamlessly integrating deepfake technologies into widely-used platforms, as seen in today's phone spam filters, we can protect more voters regardless of their own resources or knowledge in this area."

 

However, the role of the public cannot be overstated. Empowering citizens to identify and report misinformation is crucial. Efforts to enhance digital literacy and critical thinking skills among the populace are fundamental to helping citizens see through the AI haze.

 

Neil Sahota, a leading AI Advisor to the UN, says, "Educating the public about the tactics used in digital manipulation, how to identify credible information sources, and the importance of critical thinking is essential in building resilience against misinformation."

 

The ascent of generative AI has put democracies around the globe to an uncharted test. While the Indian elections have concluded, the clamour created by AI has surely set a precedent for the world to contemplate. It is now crystal clear that the only custodian of truth is an informed citizen.
