Just what is AI news reporting? In their relatively brief existence as creators of news articles, artificial intelligence programs have made numerous errors, including miscalculating simple interest, misrepresenting the chronology of Star Wars movies, and generating sports stories that demonstrate a lack of genuine sports knowledge.
The most recent case of AI news reporting causing embarrassment involves an obituary of a former NBA player, headlined “Useless at 42”. The article, originally published on a lesser-known news site called Race Track but widely shared through MSN.com, appears to have been derived from a legitimate TMZ news story about the passing of Brandon Hunter.
It appears to have been processed through a tool referred to as a “Spinner,” which attempts to conceal plagiarism by substituting certain words with synonyms.

Not All Synonyms Are Created Equal
Not all synonyms are suitable replacements in every context, and blind substitution can produce peculiar or inaccurate descriptions. In Brandon Hunter’s obituary, terms like “participant” and “performed” to describe an NBA player, and the phrase “17 factors in a recreation” (evidently spun from “17 points in a game”), read as strange and out of place.
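To illustrate why spinners produce this kind of garbled output, here is a minimal sketch of how such a tool might work: swapping each word for a dictionary synonym with no regard for context. The synonym table is hypothetical, chosen only to mirror the odd word choices seen in the obituary; real spinners are more elaborate but share the same underlying flaw.

```python
# Hypothetical synonym table; real spinners use much larger dictionaries.
SYNONYMS = {
    "player": "participant",
    "played": "performed",
    "points": "factors",
    "game": "recreation",
}

def spin(text: str) -> str:
    """Replace each word with a listed synonym, ignoring context."""
    words = []
    for word in text.split():
        # Strip trailing punctuation so "game." still matches "game".
        core = word.rstrip(".,")
        suffix = word[len(core):]
        replacement = SYNONYMS.get(core.lower(), core)
        # Preserve a leading capital from the original word.
        if core[:1].isupper():
            replacement = replacement.capitalize()
        words.append(replacement + suffix)
    return " ".join(words)

print(spin("The player scored 17 points in a game."))
# The participant scored 17 factors in a recreation.
```

Because the substitution never checks whether a synonym fits the domain, basketball vocabulary (“points”, “game”) collapses into nonsense (“factors”, “recreation”), exactly the failure mode seen in the published obituary.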
The widely ridiculed belly flop incident, which circulated across social media, proved to be not just an embarrassment for MSN but also for its parent company, Microsoft, a prominent AI developer. This AI news reporting mishap highlighted the challenges and limitations of automated journalism as a whole.

What Are the Limitations of AI-Generated Content?
Generative AI, while powerful, has its share of bugs and limitations, the kinds of errors that would likely get a rookie reporter dismissed. These limitations include:
- Difficulty Discerning Fact From Fiction: AI lacks the ability to inherently differentiate between factual information and falsehoods. As a result, it can inadvertently generate content that is inaccurate or misleading.
- Inability to Gather New Information: AI systems cannot independently reach out to experts and sources to collect fresh information. This hinders their effectiveness, particularly in covering breaking news stories that require up-to-the-minute updates.
- Challenges with Context and Cultural Nuance: AI often struggles to understand the context and cultural subtleties necessary for crafting appropriate news articles. It may inadvertently generate content that lacks cultural sensitivity or appropriate tone.
These limitations emphasise the essential role of human journalists and editors in the news industry. While AI can assist in content generation and automation, human oversight remains crucial to ensure accuracy, ethical reporting, and adherence to journalistic standards, and to keep inaccurate AI news reporting to a minimum.
The AI-generated travel article recommending a visit to the Ottawa Food Bank highlights the potential pitfalls of relying solely on AI for content creation. The article’s suggestion to visit a food bank on an empty stomach was not only inappropriate but also insensitive.
Microsoft’s Stance on AI News Reporting
Microsoft’s response to such incidents has been to enhance its systems so that inaccurate AI-generated content is identified and prevented from being published. However, it is not clear how these AI-generated pieces slipped past human gatekeepers, if any were involved in the process.
In some cases, AI news reporting content may be published without proper oversight, leading to embarrassing or inappropriate results. This underscores the importance of maintaining a balance between AI automation and human editorial control to ensure the quality, accuracy, and ethical standards of published content. It’s worth noting that the original publisher, Race Track, could not be reached for comment and appears to have been deleted, further highlighting the potential risks associated with relying solely on AI news reporting content.

AI news reporting prose can vary widely in quality, and at times it can be painfully clunky and awkward. Instances that described a high school football game as a “close encounter of the athletic kind” or a team “avoiding the brakes and shifting into victory gear” clearly demonstrate the potential for AI-generated content to produce awkward and unintentionally humorous results.
These instances highlight the importance of refining and fine-tuning AI systems for specific tasks, as well as the need for human oversight to ensure that the generated content meets the standards of readability and coherence expected by readers. It’s not uncommon for AI news reporting content to require editing and revision to make it more natural and engaging for readers.
“As with any new technological advance, some glitches can occur,” Jay Allred, chief executive of Lede AI, conceded in a statement to The Washington Post.
The issues encountered in AI-written articles often stem from both human and machine factors. Many publishers of AI-generated news articles, including MSN, seem to have missed a crucial step in the journalistic process—namely, the double-checking and editing of copy before publication.
What’s in Store for the Future?
A significant factor contributing to these challenges may indeed be Microsoft’s decision in 2020 to lay off dozens of journalists responsible for maintaining MSN’s homepage and news pages. Human oversight and editorial control are vital in ensuring that AI-generated content meets the necessary quality, accuracy, and ethical standards. When these critical steps are omitted or reduced, it can result in the publication of content that lacks the polish and precision expected in journalism.
The successful integration of AI into journalism often requires a careful balance between automation and human intervention to produce reliable, engaging, and high-quality news articles.
Frequently Asked Questions
How Can AI Integrate Into Journalism?
The successful integration of AI into journalism often requires a careful balance between automation and human intervention to produce reliable, engaging, and high-quality news articles.
What Issues Can Arise From AI Generated Content?
The issues encountered in AI-written articles often stem from both human and machine factors. Many publishers of AI-generated news articles, including MSN, seem to have missed a crucial step in the journalistic process—namely, the double-checking and editing of copy before publication.
How Does AI Often Write Content?
It appears to have been processed through a tool referred to as a “Spinner,” which attempts to conceal plagiarism by substituting certain words with synonyms.