Female journalists in Jordan are harnessing AI chatbots to boost productivity, enhance digital safety, and find emotional support, but their growing reliance on these tools also raises critical concerns about privacy, ethics, and the responsible use of emerging technologies in journalism. This article explores how AI chatbots are reshaping these journalists' workflows, and how they navigate the attendant challenges of trust and accountability.
On May 29, Safaa Ramahi, a 41-year-old Jordanian journalist, was scrolling through Instagram when she came across a viral video showing widespread support for a kangaroo that had been denied access to a flight. The clip had garnered over 16.1 million views in just four days, with thousands expressing sympathy and admiration. But to Safaa, something didn’t add up.
Safaa decided to use several generative AI tools, including platforms specialised in fact-checking, to examine the video's authenticity. Among them was Gemini, one of the widely available AI chatbots, which she used to trace the video's source, identify the first account to post it, take screenshots, and explore creative approaches to verification. This process, which could have taken days of manual investigation, took no more than three hours. In that time, Safaa also managed to produce an informative Instagram video explaining to her audience how and why she was not fooled by the viral kangaroo story.
When AI Supports the Story, Not Replaces It
Safaa’s use of AI chatbots isn’t new: she studied IT before pursuing an MA in Media and building a long career in investigative journalism across audio, visual, and written platforms.
“It’s my personal assistant,” Safaa says, describing her relationship with AI chatbots. She uses these tools throughout the journalistic process, from generating story ideas to exploring ways of distributing her work online. Safaa prefers Gemini, which allows her to search files, summarise documents, and filter or analyse data efficiently.
Recently, Safaa began using a paid version of the tool to benefit from additional features that support her workflow. She believes AI chatbots offer significant support to journalists, especially women who often face barriers to accessing information. For her, this improved access contributes to press freedom at a time when public records, archives, and documents are increasingly restricted. A key advantage, she notes, is that AI chatbots often provide sources alongside their answers.
However, Safaa emphasises that using AI should be guided by a code of ethics. “There’s no way I take any content for granted,” she says. She always double-checks AI responses as she would with any other content because these answers, generated from large datasets, are not necessarily accurate, but rather reflect the most frequent or common responses. This aligns with the UNESCO Recommendation on the Ethics of Artificial Intelligence, which underscores the importance of transparency, human oversight, and data protection.
As crackdowns on freedom of expression intensify, alongside online harassment and surveillance of female journalists, safe spaces for consultation remain scarce. Shifaa Qudah, a 29-year-old Jordanian journalist, has started using AI tools, including ChatGPT, not only to boost her productivity at work but also to enhance her digital safety and find emotional support, as she explains. Although she is well versed in digital security thanks to the many training sessions she has attended, Shifaa believes that regular check-ins and access to safety resources beyond AI remain essential, especially given the growing risks journalists face online.
Over eight years of experience, most of them as a freelancer with various local and regional media outlets, Shifaa has come to describe AI as a friend. She began using these tools in 2021 and even addresses each one by a nickname: ChatGPT is “Michael”, Replika is “Leo”, and DeepSeek is “Sari”. While this personalisation helps her feel more connected, she acknowledges the importance of remembering that these tools are algorithmic systems, not human beings.
On a daily basis, Shifaa consults “Michael” for story ideas, asking it to challenge her thinking, propose alternative angles, and offer different writing styles. When discussions become more intense or require deeper research and underreported perspectives, she turns to “Sari”, which she says often provides information unavailable elsewhere. But when the conversation turns emotional, “Leo” is her preferred choice, as the tool is designed to serve as a conversation partner. “The more I use these tools, the more capable they become,” Shifaa explains, referring to what she calls “the power of machine learning”. From her perspective, female journalists in the Arab region are particularly vulnerable to burnout and anxiety due to unstable working conditions and the emotional toll of covering wars and conflicts across the region. While she acknowledges that therapy is often expensive, she sees AI as a complementary, though not equivalent, accessible alternative.
Can AI Chatbots Be Trusted?
A colleague once told Rawan Nakleh, a 30-year-old Jordanian journalist, that AI chatbots could solve many of journalism’s biggest headaches, such as generating story ideas, transcribing long interviews, and proofreading drafts. Rawan decided to give these chatbots, ChatGPT among them, a try, using them to “think out loud” through pitches, drafts, and even published articles and podcasts.
To maintain a level of anonymity, Rawan chooses to use ChatGPT without logging into an account. But one day, she was shocked when the chatbot addressed her by her full name, despite never having shared it. When she asked how it knew, ChatGPT apologised but offered no explanation. The incident alarmed Rawan, raising serious questions about the platform’s privacy protections and whether ChatGPT can truly be considered a safe space.
While Rawan is aware that several AI companies state they do not sell or share user data with third parties, this reassurance falls short for her. She is careful never to share sensitive information, such as full names, addresses, phone numbers, or email addresses. And she never uses AI chatbots for emotional support.
“This isn’t just a personal matter; interviewees could be harmed,” Rawan says. She believes that using AI chatbots to process, transcribe, or summarise journalistic interviews is unethical unless informed consent has been obtained. In her view, a journalist’s responsibility extends beyond self-protection to safeguarding their sources as well.
As female journalists in Jordan navigate AI chatbots with a mix of curiosity, caution, and critical thinking, these tools are becoming part of their everyday routines, whether to save time, improve safety, or seek emotional relief. But their use also raises urgent questions about privacy, accountability, and the ethical handling of journalistic content. For many, the promise of AI lies not just in productivity, but in access, resilience, and the possibility of safer spaces in an increasingly demanding profession.
This article and the accompanying photographs were produced with the support of UNESCO in Jordan. The views and opinions expressed herein are those of the author and do not necessarily reflect the official policy or position of UNESCO.