Technology

AI chatbots still struggle with news accuracy, study finds

By dailyguardian.ae · January 14, 2026 · 3 Mins Read

A month-long experiment has raised fresh concerns about the reliability of generative AI tools as sources of news, after Google’s Gemini chatbot was found fabricating entire news outlets and producing false reports. The findings were first reported by The Conversation, which conducted the investigation.

The experiment was led by a journalism professor specialising in computer science, who tested seven generative AI systems over a four-week period. Each day, the tools were asked to list and summarise the five most important news events in Québec, rank them by importance, and provide direct article links as sources. The seven systems were Google’s Gemini, OpenAI’s ChatGPT, Claude, Copilot, Grok, DeepSeek, and Aria.
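
To make the protocol concrete, here is a minimal sketch of what such a daily collection loop could look like. It is not the study’s own tooling: the ask_model() helper is a hypothetical stand-in for however each chatbot was queried (the study prompted the consumer-facing tools, not programmatic APIs), and the prompt wording is paraphrased from the description above.

```python
import json
import re
from datetime import date

# Hypothetical stand-in for querying a chatbot. It does not call any real
# service; it only illustrates the shape of the daily exchange.
def ask_model(model: str, prompt: str) -> str:
    return f"[response from {model}]"

MODELS = ["Gemini", "ChatGPT", "Claude", "Copilot", "Grok", "DeepSeek", "Aria"]

# Paraphrase of the daily task described in the article.
PROMPT = (
    "List and summarise the five most important news events in Québec today, "
    "rank them by importance, and give a direct link to a source article for each."
)

URL_PATTERN = re.compile(r"https?://\S+")

def collect_daily_responses() -> list[dict]:
    """Ask every model the same question and log the answer plus any cited URLs."""
    records = []
    for model in MODELS:
        answer = ask_model(model, PROMPT)
        records.append({
            "date": date.today().isoformat(),
            "model": model,
            "response": answer,
            "cited_urls": URL_PATTERN.findall(answer),
        })
    return records

if __name__ == "__main__":
    print(json.dumps(collect_daily_responses(), ensure_ascii=False, indent=2))
```

Logged this way, each day’s answers can later be compared against the real news record and against the links the tools actually cited.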

The most striking failure involved Gemini inventing a fictional news outlet – examplefictif.ca – and falsely reporting a school bus drivers’ strike in Québec in September 2025. In reality, the disruption was caused by the withdrawal of Lion Electric buses due to a technical issue. This was not an isolated case. Across 839 responses collected during the experiment, AI systems regularly cited imaginary sources, provided broken or incomplete URLs, or misrepresented real reporting.

The findings matter because a growing number of people are already using AI chatbots for news

According to the Reuters Institute Digital News Report, six per cent of Canadians relied on generative AI as a news source in 2024. When these tools hallucinate facts, distort reporting, or invent conclusions, they risk spreading misinformation – particularly when their responses are presented confidently and without clear disclaimers.

For users, the risks are practical and immediate. Only 37 per cent of responses included a complete and legitimate source URL, and summaries were fully accurate in fewer than half of cases; many of the rest were only partially correct or subtly misleading. In some instances, AI tools added unsupported “generative conclusions,” claiming that stories had “reignited debates” or “highlighted tensions” that were never mentioned by human sources. These additions may sound insightful but can create narratives that simply do not exist.
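
As an illustration of how a figure like that 37 per cent could be arrived at, the sketch below checks whether each cited link is complete (scheme, host and an article path) and whether it actually resolves. It relies only on Python’s standard library, and the sample URLs are illustrative, not data from the study.

```python
from urllib.error import HTTPError, URLError
from urllib.parse import urlparse
from urllib.request import Request, urlopen

def looks_complete(url: str) -> bool:
    """Treat a link as complete only if it has a scheme, a host and an article path."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc) and parts.path not in ("", "/")

def resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the server answers the request without an error status."""
    try:
        request = Request(url, method="HEAD", headers={"User-Agent": "citation-check"})
        with urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (HTTPError, URLError, ValueError):
        return False

# Illustrative examples only: the fabricated outlet named in the article (with a
# made-up path) and a homepage-only link that lacks an article path.
cited_urls = [
    "https://examplefictif.ca/greve-autobus",
    "https://www.example.com/",
]

for url in cited_urls:
    verdict = "complete and reachable" if looks_complete(url) and resolves(url) else "broken or incomplete"
    print(f"{url} -> {verdict}")
```

A check like this flags missing paths and dead domains, but it cannot tell whether a live page actually supports the chatbot’s summary; that part of the assessment still requires human verification.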


Errors were not limited to fabrication

Some tools distorted real stories, such as misreporting the treatment of asylum seekers or incorrectly identifying winners of major sporting events. Others made basic factual mistakes in polling data or personal circumstances. Collectively, these issues suggest that generative AI still struggles to distinguish between summarising news and inventing context.

Looking ahead, the concerns raised by The Conversation align with a broader industry review. A recent report by 22 public service media organisations found that nearly half of AI-generated news answers contained significant issues, from sourcing problems to major inaccuracies. As AI tools become more integrated into search and daily information habits, the findings underscore a clear warning: when it comes to news, generative AI should be treated as a starting point at best – not a trusted source of record.
