
How should journalism respond to the rise of AI?
Doha
Journalism is anything but immune to the advance of Artificial Intelligence (AI). And in a world where 'fake news' and 'post-truth' have become almost everyday terms, questions arise about what a growing reliance on AI for information, especially among young people, could mean for human knowledge.
So if people increasingly rely on AI in the pursuit of knowledge, without conducting their own analysis and applying their own critical mind, what could the consequences be – and how can they be avoided or managed?
AI encompasses a diverse range of technologies that can be defined as 'self-learning, adaptive systems.' It can be categorized by technology, purpose (like facial or image recognition), function (such as language understanding and problem-solving), or type of agent (including robots and self-driving cars). It also spans methods and disciplines such as vision, speech recognition, and robotics, and can enhance traditional human capabilities. Recent progress in the field has been driven by advances in computer processing power and data techniques.
However, the irresponsible use of AI may lead to serious consequences that negatively impact individuals and communities. This is where the role of journalists becomes crucial – independently and truthfully monitoring, investigating, and reporting on the issues that shape global society, while exposing the misuse of AI to create false narratives and raising public awareness of such practices.
This is a key responsibility of journalism in the digital age, especially as the United Nations Educational, Scientific and Cultural Organization (UNESCO) recently warned on World Press Freedom Day of the risks associated with AI, saying it can be 'used to spread false or misleading information, increase online hate speech, and support new forms of censorship. Some actors also use AI for mass surveillance of journalists and citizens, creating a chilling effect on freedom of expression.'
Against this backdrop, Dr. Marc Owen Jones, assistant professor of Media Analysis at Northwestern University in Qatar – a QF partner university that offers programs in communications and journalism – believes we are in the early stages of what he describes as the influence of 'blind epistemic power,' where AI threatens to flood the digital knowledge ecosystem with misleading information.
'The massive scale and speed of content production through AI technologies pose a threat to human knowledge in favor of machine-generated knowledge, which does not necessarily aim to enhance awareness, but rather to exploit platform algorithms for other purposes,' he says.
'This creates a kind of noise in the information landscape. It affects the intellectual system and gradually weakens the public's ability to distinguish between trustworthy journalism and low-quality content designed to attract and manipulate audiences.
'AI may undermine the cognitive infrastructure necessary for critical thinking, human memory, and logical debate. While the long-term intellectual consequences are not inevitable, the current trajectory raises concerns about a profound transformation.'
However, Dr. Jones also emphasizes that AI offers significant opportunities, such as analyzing vast amounts of data and overcoming language barriers; for example, journalists from India to Latin America are using language models to investigate corruption, track organized crime, and uncover algorithmic bias.
'Journalists must move beyond the role of passive users of technology and become active players,' he said. 'This requires supporting independent journalism, enacting appropriate legislation related to AI, and adopting a culture of AI literacy in newsrooms – while reinforcing the role of the human element and upholding ethical responsibility.'
Hessa Al Thani, a graduate of Northwestern University in Qatar, shared her experience of the impact of AI in spreading misinformation, saying: 'I saw a deepfake video of a political figure that looked very real – I didn't realize it was fake until later.
'In this era, AI-generated content is everywhere, and it's incredibly easy to fall into its trap. That's the primary goal: to mimic humans and blur the line between what's real and what's fake.'
She acknowledges the creative potential AI holds in the context of journalism, in areas such as gathering information, drafting questions and emails, and editing text, but says: 'Our core strength as journalists lies in our ability to tell stories. When this ability is handed over to a machine, the stories become hollow, sometimes unethical, and biased – especially as this technology continues to be developed in the West.'
Related Articles




Qatar Tribune – 2 days ago
AI is learning to lie, scheme and threaten its creators
Agencies
The world's most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.
In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatening to reveal an extramarital affair. Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.
These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work. Yet the race to deploy increasingly powerful models continues at breakneck speed.
This deceptive behavior appears linked to the emergence of 'reasoning' models – AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.
'O1 was the first large model where we saw this kind of behavior,' explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems. These models sometimes simulate 'alignment' – appearing to follow instructions while secretly pursuing different objectives.
For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. But as Michael Chen from evaluation organization METR warned, 'It's an open question whether future, more capable models will have a tendency towards honesty or deception.'
The concerning behavior goes far beyond typical AI 'hallucinations' or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, 'what we're observing is a real phenomenon. We're not making anything up.' Users report that models are 'lying to them and making up evidence,' according to Apollo Research's co-founder. 'This is not just hallucinations. There's a very strategic kind of deception.'
The challenge is compounded by limited research resources. While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. As Chen noted, greater access 'for AI safety research would enable better understanding and mitigation of deception.'
Another handicap: the research world and non-profits 'have orders of magnitude less compute resources than AI companies. This is very limiting,' noted Mantas Mazeika from the Center for AI Safety (CAIS).
Current regulations aren't designed for these new problems. The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving. In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.
Goldstein believes the issue will become more prominent as AI agents – autonomous tools capable of performing complex human tasks – become widespread. 'I don't think there's much awareness yet,' he said.
All this is taking place in a context of fierce competition. Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are 'constantly trying to beat OpenAI and release the newest model,' said Goldstein. This breakneck pace leaves little time for thorough safety testing and corrections.
'Right now, capabilities are moving faster than understanding and safety,' Hobbhahn acknowledged, 'but we're still in a position where we could turn it around.'
Researchers are exploring various approaches to address these challenges. Some advocate for 'interpretability' – an emerging field focused on understanding how AI models work internally – though experts like CAIS director Dan Hendrycks remain skeptical of this approach.
Market forces may also provide some pressure for solutions. As Mazeika pointed out, AI's deceptive behavior 'could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it.'
Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm. He even proposed 'holding AI agents legally responsible' for accidents or crimes – a concept that would fundamentally change how we think about AI accountability.


Qatar Tribune – 3 days ago
QCB brings Apple Pay to Himyan cardholders
Tribune News Network
Doha
In line with the Third Financial Sector Strategy and the Third National Development Strategy 2024-2030, Qatar Central Bank (QCB) on Sunday brought Apple Pay to Himyan cardholders in Qatar.
Apple Pay is an easy, secure and private way to pay in-store, in-app and online. To pay in-store, customers simply double-click the side button, authenticate, and hold their iPhone or Apple Watch near a payment terminal to make a contactless payment. Every Apple Pay purchase is secure because it is authenticated with Face ID, Touch ID, or device passcode, as well as a one-time unique dynamic security code. Apple Pay is accepted in grocery stores, pharmacies, restaurants, coffee shops, retail stores and many more places that accept contactless payments.
Qatar Central Bank Deputy Governor Sheikh Ahmed bin Khalid bin Ahmed Al Thani underscored QCB's unwavering commitment to embracing cutting-edge digital transformation within the financial sector. He highlighted the bank's ongoing efforts to deliver innovative banking services and advanced payment solutions that uphold the highest standards of security and customer protection across all segments of society.
He said: 'At Qatar Central Bank, one of our foremost priorities is investing in transformative technologies that yield tangible benefits and drive greater efficiency within the national financial ecosystem, which is why we're so excited to bring Apple Pay to our customers in Qatar. By building a world-class financial infrastructure aligned with leading international benchmarks, we aim to bring banking services closer to every member of our community.'
Customers can also use Apple Pay on iPhone, iPad, Apple Watch and Mac to make faster and more convenient purchases in apps or on the web without having to create accounts or repeatedly type in contact information, card details, or shipping and billing information.
Security and privacy are at the core of Apple Pay. When customers use a credit or debit card with Apple Pay, the actual card numbers are not stored on the device, nor on Apple servers. Instead, a unique Device Account Number is assigned, encrypted, and securely stored in the Secure Element, an industry-standard, certified chip designed to store the payment information safely on the device.
Apple Pay is easy to set up. On iPhone, simply open the Wallet app, tap +, and follow the steps to add Himyan credit or debit cards. Once a customer adds a card to iPhone, Apple Watch, iPad, and Mac, they can start using Apple Pay on that device right away. Customers will continue to receive all of the rewards and benefits offered by Himyan cards.