Yes, I'm real: Your guide to spotting AI-generated videos
Artificial intelligence videos are flooding social media feeds, and distinguishing them from real footage is getting trickier by the week.
CHICAGO - AI videos aren’t novelties anymore. They’re everywhere – in your feed, on the news, sandwiched between cat videos and political rants. And they’re realistic enough that people are starting to pause mid-scroll and wonder: wait, is this real?
That question came up in our own comment section after FOX Chicago posted a TikTok about the recent cold snap.
Some viewers joked that the speaker looked AI-generated. Others genuinely weren’t sure. The whole thing became a weird reminder: the line between real and synthetic video has gotten uncomfortably blurry.
This matters more than it might seem. AI video tools are advancing faster than the systems designed to flag or regulate them.
With election season ramping up and misinformation spreading like wildfire across social platforms, being able to spot a fake is becoming less of a specialized skill and more of a basic survival tactic online.
What to look for:
Hands are your first clue, according to detection experts and MIT Media Lab research. AI still can’t quite nail fingers. You’ll see extra digits, weird bends, hands that flicker in and out of existence. If someone gestures and their hands look wrong, trust that instinct.
Eyes and mouths give it away too. MIT researchers studying deepfake detection point out that lips often don’t sync perfectly with speech in AI videos.
Blinking gets strange too: too frequent, too delayed, or oddly mechanical. Humans blink without thinking about it. AI thinks too hard about it, and it shows.
Don’t ignore the background. AI-generated scenes look smooth at first, but zoom in and things get warped. Door frames bend. Objects blur for no reason. Details shift slightly between frames in ways that shouldn’t happen in real footage.
And then there’s your gut. A lot of AI videos shoot for perfection, which is exactly the problem. Skin looks airbrushed. Lighting feels too studio-clean for a casual setting. Cybersecurity experts say that when something strikes you as robotic or unnaturally polished, that instinct is usually onto something.
Why this happens:
AI tools are now cheap, fast, and available to anyone with a phone and five minutes.
You can generate convincing video without any technical skill. Meanwhile, social platforms reward engagement, not accuracy, and that’s a recipe for chaos.
Digital literacy researchers have noticed that comment sections are becoming early warning systems. When viewers start arguing about whether something’s real, it’s often a sign that AI is doing exactly what it was designed to do: blend in seamlessly.
Where the guardrails are:
Social media companies are testing labels and detection tools, but media forensics experts say those systems are inconsistent and easy to work around. Until stronger standards exist, most of the responsibility falls on us as viewers.
Government action is starting to materialize. President Donald Trump signed the TAKE IT DOWN Act in May 2025, the first federal law targeting AI-generated content. It criminalizes non-consensual intimate images and deepfakes, and requires platforms to create takedown processes by May 2026.
Forty-six states have passed their own deepfake laws as of early 2026, most focused on election integrity, intimate imagery, and disclosure requirements for political content. But Congress hasn’t tackled broader regulations around deepfakes in news and misinformation, so the legal landscape remains a patchwork.
For now, the best defense is pretty simple, according to researchers who study this stuff: slow down. Look closely. Pay attention to the details that feel slightly wrong.
In a feed designed to keep you scrolling at maximum speed, taking an extra second to question what you’re seeing might be the difference between staying informed and getting played.
The Source: The information in this article was reported by FOX Chicago's Terrence Lee.