Google AI Error Sparks Concern Over Journalism and Misinformation

A Google AI chatbot incorrectly identified a journalist as a child killer, raising alarm about how unchecked artificial intelligence may endanger professional journalism and public trust.
Scrutiny of AI-generated misinformation has intensified following a major error by Google's new chatbot, which wrongly accused a newspaper designer of involvement in a historical child murder case. The mistake has deepened concern among media executives and government officials who argue that AI systems are now misinterpreting trusted news content without adequate oversight.

The error occurred soon after a parliamentary hearing in New South Wales, where an MP used legal privilege to name the true suspect in the decades-old disappearance of Cheryl Grimmer. Online readers, eager to uncover the suspect's identity, asked Google’s AI Mode for information. With limited direct answers online, the chatbot guessed and falsely linked the crime to a graphic designer whose name had appeared beside a visual element in the original news coverage. The AI drew incorrect conclusions from this context alone, spreading a serious falsehood.

This incident illustrates what experts call an AI "hallucination": a flaw in large language models that causes them to generate inaccurate information from unrelated data. Although Google removed the misinformation and emphasised its use of high-quality sources, critics have been vocal about the absence of human editorial oversight and the lack of systemic safeguards. The event also highlights how AI can distort journalistic work, wrongly identifying contributors and misrepresenting content.

Australia’s leading media organisations are calling for stricter regulation in response, stressing that this incident shows the dangers of relying on unverified AI-generated news. There is also growing support for initiatives that protect and support journalism, including the News Media Assistance Program, which recently received $67.6 million in funding for quality reporting. Industry figures warn that without stronger safeguards, AI platforms may amplify misinformation and erode the journalism they are intended to support.

Globally, Google’s news platforms have been under similar scrutiny. Recent reports revealed that its Discover feature has delivered fake and misleading content disguised as legitimate news around the world. These incidents raise further questions about AI’s influence on public understanding. While Google works to improve its algorithms, critics argue that clearer standards, defined accountability and direct editorial involvement are urgently required to limit the reputational and legal risks posed by advanced AI tools.
