Secure Video Ep. 3: How Do I Know if This Video is Real?


The digital landscape is facing an unprecedented crisis as AI-generated images and videos become nearly indistinguishable from reality. With the rapid advancement of generative AI, fake videos, deepfakes, and unauthorized edits can convincingly impersonate leaders, alter key moments, and spread misinformation at scale. A fabricated clip of an executive announcement, a manipulated government briefing, or an edited corporate communication can quickly erode trust and create real-world consequences. In this environment, a fundamental question emerges: how can we actually tell if a video is real? And perhaps even more importantly, what does “real” mean in a world where almost anything can be synthetically generated?

In this episode of Secure Video, Vbrick CIO Terry Medhurst explores these questions and outlines a new approach to digital media authenticity. Traditional methods like fact-checking, AI detection tools, or simply labeling content as true or false are no longer enough. These approaches are reactive and ultimately part of a losing arms race. As generative AI becomes more advanced, detection methods struggle to keep up. At Vbrick, we refer to this traditional approach as the “Truth Model.” The Truth Model focuses on analyzing the media itself—using visual analysis, forensic techniques, and AI detection tools to determine whether something appears to be manipulated or misleading.

While these techniques are valuable and will remain an important part of the toolkit, they have inherent limitations. Sophisticated AI models can evade detection, and visual analysis can only provide probabilities rather than certainty. In other words, the Truth Model can help raise questions about content, but it cannot definitively prove authenticity.

That’s why Vbrick believes the future lies in what we call the “Trust Model.” Instead of trying to determine whether content looks real after it has already been created and distributed, the Trust Model focuses on verifying the source of the content from the start. If you can trust the origin of a piece of media—who created it, when it was created, and how it has been edited—then you can trust the content itself. Imagine a system where creators can publicly claim ownership of their content and verify its authenticity beyond doubt. In the episode, Terry outlines the key elements required to make this vision possible, including blockchain technology and Content Credentials that establish a transparent record of digital media provenance.

Vbrick’s Verified Authentic initiative is designed to bring trust back to digital media. Built on blockchain and based on C2PA (Coalition for Content Provenance and Authenticity) Content Credentials, Verified Authentic creates a tamper-proof ledger documenting who created a piece of content and how it has changed over time. If the Content Credentials don’t match the blockchain record, the discrepancy is immediately flagged—no guesswork required.
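The "flag the discrepancy" step described above boils down to a pair of digest comparisons. The sketch below is an illustrative assumption of that check, not Verified Authentic's actual code: it compares the file's current hash against the digest in its embedded credential, and that digest against the one anchored on the ledger.

```python
import hashlib

def flag_if_tampered(video_bytes: bytes, credential_digest: str, ledger_digest: str) -> list[str]:
    """Return a list of discrepancies between the file, its embedded
    credential digest, and the digest anchored on the ledger.
    An empty list means both checks passed."""
    issues = []
    actual = hashlib.sha256(video_bytes).hexdigest()
    if actual != credential_digest:
        issues.append("content does not match its embedded Content Credential")
    if credential_digest != ledger_digest:
        issues.append("Content Credential does not match the ledger record")
    return issues
```

Either mismatch is decisive on its own: a file that fails the first check was altered after its credential was issued, and a credential that fails the second was never the one recorded at publication time.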

This shift from truth-based verification to trust-based authentication represents a fundamental transformation in how we validate digital media. Verified Authentic is designed to be transparent, open, and accessible, giving businesses, governments, and creators a powerful tool to combat manipulation and disinformation. The goal isn’t just to detect deepfakes—it’s to declare authenticity with confidence.

Published On: 03/10/26