As artificial intelligence transforms newsrooms across South Asia, journalists grapple with the fine line between enhancement and dependency
In Dharmendra Rajpoot’s modest home office in Lakhimpur Kheri, the pre-dawn ritual begins not with coffee or newspapers, but with the quiet clatter of keyboard strokes as he works with ChatGPT. For this self-taught reporter from a small north Indian town bordering Nepal, artificial intelligence has become the bridge between his Hindi-language expertise and the English-language journalism world that once seemed impossibly distant.
"I began my career in journalism in 2014 with a Hindi news channel, covering local stories that rarely made it to the national spotlight," says Rajpoot, who has successfully secured environmental reporting grants and published stories in English with AI assistance. "But the moment I stepped out on my own as a freelancer, I faced a major barrier: language."
Rajpoot's transformation from struggling with English applications to confidently pitching international publications mirrors a broader revolution sweeping through South Asian newsrooms. Large language models have democratized journalism in ways that traditional training programs never could, breaking down linguistic barriers and accelerating research processes. Yet this technological embrace raises uncomfortable questions about the soul of journalism itself.
The Great AI Divide
The debate over AI in journalism has created distinct camps across South Asian newsrooms. On one side are journalists like Rajpoot, who view AI as an equalizing force. On the other are veterans who worry about the erosion of fundamental reporting skills.
"AI acts like a silent assistant," Rajpoot explains. "The research, the reporting, the interviews, the photos—that's all mine. AI helps me organize my thoughts, sharpen my expression, and polish my drafts." His recent stories on tiger-human conflict in the Terai and climate-related crop damage demonstrate how AI can amplify rather than replace traditional journalism.
But this enthusiasm isn't universal. Shumaila Khan, Chief Correspondent at Nukta Pakistan and former BBC video journalist, represents the cautious camp. "As someone who has always worked in regional language journalism, transitioning into a space where AI-generated articles are considered publishable has been ethically complex for me," she admits.
The Productivity Revolution
“What reporters do is extremely valuable. We want to use AI to amplify what the reporters do. We don’t want to use AI to replace the reporters,” as one speaker put it at the 2025 Summit on AI, Ethics and Journalism.
For those who have embraced AI tools, the productivity gains are undeniable. Rathindra, a Sri Lankan journalist, uses multiple LLMs as research assistants and editing tools. "LLMs are excellent research assistants that can save you hours of work, if you prompt them correctly," he notes. "Rather than me going through a 1500-word article to make spelling edits, I can get LLMs to do this in a minute."
This efficiency extends beyond basic editing. Journalists across the region report using AI for initial research, fact-checking background information, converting American English to British English for regional publications, and even generating story angles from scattered notes.
According to data collected between April and July 2023 by JournalismAI, more than 75% of media outlets use AI in some capacity for news gathering, production or distribution. The survey included 105 news organizations from 46 countries, and about 73% of respondents said they believed AI applications open up new opportunities.
One Radio Television Digital News Association article explains, “AI intersects with core journalism principles like accuracy, context, trust, and transparency. Carefully weigh all issues before integrating into your news organization.”
The Ethical Tightrope
However, the line between assistance and dependency has become increasingly blurred. Several journalists expressed concerns about colleagues who have become overly reliant on AI for tasks that were once considered core journalism skills.
“I still find it uncomfortable to let AI write entire stories under my byline,” Khan says. She explains that many people now feed basic inputs into large language models and ask them to produce a full story, a practice she finds troubling because it signals a lack of effort and creative commitment. Her unease echoes a wider anxiety about authenticity in the age of artificial intelligence.
The visual journalism sector faces unique challenges. Drawing from her time at the BBC, Khan recalls a common newsroom saying: “If there’s no image, it’s not a visual story; it’s radio.” While reputable news outlets typically don't use AI-generated images or videos, she notes that in some cases such visuals are creeping in, raising concerns. “AI-generated visuals blur the line between real and fabricated reporting,” she warns.
Regional Perspectives and Dependency Concerns
The adoption patterns vary significantly across South Asian countries. In India, smaller news organizations and freelance journalists have embraced AI more readily than established English-language publications. Bangladeshi journalists report using AI primarily for translation and fact-checking, while Sri Lankan newsrooms have integrated AI into routine editing processes.
Pakistani journalists face unique challenges, with some organizations explicitly prohibiting AI use for original content while allowing it for research and editing. This patchwork of policies reflects the industry's struggle to establish consistent ethical guidelines.
Perhaps the most pressing concern among veteran journalists is the potential for skill atrophy. Several editors noted that younger journalists who rely heavily on AI tools struggle with fundamental writing and reporting skills when the technology isn't available.
"Initially, I hesitated to even open ChatGPT in the newsroom, afraid colleagues would think I was cheating," one journalist confided. This anxiety about perception highlights the stigma that still surrounds AI use in traditional newsrooms.
Another journalist, an independent reporter who spoke on condition of anonymity, added: “As journalists, let's not rely on AI to write for us. Let's do that ourselves. Let's preserve our voices.”
Voices from the Newsroom
Karthik Madhavapeddi, Deputy Editor at IndiaSpend, a data-driven public policy news portal, emphasized the balance between using AI as a tool and maintaining journalistic integrity.
“I think it is now beyond convenience. You can prompt some of the tools to act as your reader, editor or colleague and get you to think critically, ensure there are no blindspots in your reporting. So, AI can be incredibly helpful,” he said, adding, “Having said that, these tools cannot do your job for you. When you rely on them too much, it is clear to your reader that your basics are missing—especially for a niche like data-driven reporting. The mantra is still pretty much the same: trust, but verify. Most tools are still prone to hallucination and could give you conclusions based on incomplete and outdated information. You are responsible to your readers, not the tool.”
“Remember that your story is about people—AI cannot tell you about them. And find your voice,” Madhavapeddi said.
He adds, “Don’t make things up, and don’t ask the tool to make things up for you—be it AI or something else. Not disclosing the use of AI has been the biggest red flag I’ve seen yet. Given that the tools are evolving rapidly, policies, if any, should be dynamic. Newsrooms should disclose if and how they are using AI, and we’re seeing these disclaimers already on syndicated stories or interview transcripts. Especially in a country like India, where digital literacy is still not universal, it is important to explain to your readers what this means for them.”
Arvind Shukla, founder of Newspotli, a website focused on agriculture in India—an area that often intersects with issues of climate change and gender—believes that drawing clear ethical lines is essential as AI tools become more integrated into journalism. “There is no problem in using AI for grammar correction, translation or even to help structure your thoughts when you’re overwhelmed by data,” Shukla said. “But the moment you let it write your entire report or use it to create fake visuals or voices without disclosure, you’ve crossed a line. In our kind of reporting—ground-up, people-first—you can’t replace the field with a prompt.” He adds that responsible AI use should complement, not replace, human judgment: “You still have to go to the farm, talk to the farmer, understand the story behind the numbers. AI can’t do that for you.”
The Path Forward
As AI tools become more sophisticated, South Asian newsrooms are developing more nuanced approaches to their use. The key, according to veteran journalists, lies in maintaining the human elements that define quality journalism while leveraging AI's efficiency gains.
"AI might assist with visual ideas or scripts, but it cannot replicate the grit, emotion, or trust that comes from physically being on the ground," Shumailah Khan observes. "Media organizations must continue to invest in field reporting and avoid over-reliance on AI visuals."
The transformation of journalists like Dharmendra Rajpoot demonstrates AI's potential to democratize journalism and break down barriers. However, the technology's power also demands careful consideration of its limits and appropriate applications.
The tools most commonly mentioned by South Asian journalists include:
- ChatGPT for writing assistance and research
- DeepSeek for technical analysis and data interpretation
- NotebookLM for organizing complex information
- Claude for long-form editing and structural feedback
- Perplexity for finding the right attributions
Note: Some of the people quoted in the article used LLM tools to clean, correct and translate their arguments.
About the author: The author is an award-winning self-taught freelance journalist based in India. He was a Chevening 2015 SAJP fellow.