Chat, is this real? How to push back against AI misinformation

Acacia Carol, Staff Writer
If you’re a news consumer in 2025, you’ve heard plenty about identifying, verifying, and pushing back against fake news, misinformation, and disinformation. From Facebook rants to press releases, no form of media should be treated as safe, because human error is inevitable.
In a healthy news system, there are checks and balances to ensure that consistently getting the facts wrong is met with swift corrective action.
But what happens when the human element is removed? What role does artificial intelligence (AI) play in the conversation around misinformation and fact-checking, and what role do these tools have in creating a healthy, reliable, and even somewhat trustworthy news landscape?
The Pew Research Center released a poll in 2023 finding that 52 per cent of Americans feel “more concerned than excited” about AI being integrated into their daily lives. It is hard to argue with that anxiety.
The University of Waterloo also released a study concluding that only 61 per cent of participants could correctly identify an AI-generated portrait.
It can be scary to think that the problem could be staring us in the face and we might not even recognize it.
So, what exactly makes AI so hard to spot? Well, for one, the pace at which these models learn and improve makes it difficult to develop strategies against AI misinformation that stay relevant for long.
Take, for example, an article from McGill that is only a year old. The tools and strategies it recommends are solid, but arguably already outdated. While AI still has trouble generating eyes, lighting, and hands, and still produces minor artifacting errors, the pace at which these models have grown and improved in just a year is startling.
It’s also wishful thinking to believe AI is going away anytime soon. Mark Zuckerberg announced earlier this year that Meta plans to invest over $65 billion in AI projects, and Jeff Bezos and Elon Musk are following suit.
So, what’s the solution? Are we doomed to live in a world of AI slop, fake movie trailers, and subpar voice imitations of Peter Griffin singing Hozier’s “Take Me to Church”?
Well, yes, and no. There are tools and ideas for fighting against this type of misinformation, but they are useless without consistent effort on the consumer’s end.
One such tool local to Calgarians is the MRUnderstanding Misinformation page. The faculty-created site outlines strategies for spotting misinformation using the same techniques that investigative journalists use in the field.
The SIFT method has news consumers (S)top, (I)nvestigate the source, (F)ind better coverage, and (T)race the claim in order to debunk dubious allegations.
But also consider looking to another form of intelligence when you feel unsure about the validity of a piece of media. Open-Source Intelligence (OSINT) uses publicly available technology and databases, like Google Maps, to analyze the validity of a claim. This can be great if you want to investigate an issue, claim, or story that doesn’t feel right.
The issue with these methods is that they work best when you have reasons to believe that what you are seeing hasn’t been significantly doctored and is being posted for benevolent reasons. What happens when a photo looks real enough at first glance?
This is where generative AI specifically becomes a tricky place for journalists and news consumers to navigate. Unfortunately, this is where news consumers must ultimately shoulder some responsibility for pushing against AI misinformation.
Consider searching laterally when looking at a claim. Who is publishing this article? When was their account created? What sort of replies does the post garner? Is the publisher forthcoming about where they got their information and what steps they have taken to ensure its accuracy?
In short, leave no stone unturned.
However, AI isn’t inherently problematic. In medical technology, for example, it is being used to estimate whether breast tissue contains tumours and whether those tumours are likely to develop into cancer.
AI applications are also helping to improve accessibility for disabled people in academia.
We must accept the good with the bad that comes from AI’s integration into everyday life. Musk recently fired thousands of federal employees in the hope of replacing their roles with AI, a decision that two U.S. judges overturned.
As more and more businesses integrate AI programs into their user systems and websites, there is a genuine risk that AI becomes a tool for exploiting workers in pursuit of a cheaper bottom line.
Whether that means being fired so a learning model can take over your role, or misinformation being generated that makes it harder for everyday users to spot when they are being deceived, it’s clear that regulations and guidelines have only so long before they can no longer protect the humans they are meant to serve.