This week, YouTube rolled out automatic age verification in the US. Users who the company’s AI system determines may be under 18 are forced to provide their ID or a full biometric scan just to watch certain videos. YouTube is the latest major tech platform to preemptively normalize invasive identity verification for everyday online activity—and it’s a disaster.
This new identity verification system sets a dangerous precedent. It builds a surveillance infrastructure that normalizes the tracking of legal and previously anonymous content consumption—all under the guise of child safety.
On this week’s Free Speech Friday, I explore why the preemptive, AI-driven age and identity verification used by YouTube, as well as by other tech companies like Instagram and Roblox, is so harmful and how it could change YouTube forever.
Is YouTube secretly applying an AI filter to Shorts without telling creators? I recently noticed that my videos looked strange and smeary on YouTube compared to Instagram, almost like a cheap deepfake. In this video, I investigate what’s going on and why I believe it’s a massive problem for everyone on this platform.
After talking with Rick Beato and seeing discussions on Reddit about the same “oil painting” effect on videos from creators like Hank Green, it’s clear that some kind of non-consensual AI upscaling is being applied to our content. For me, this is a huge issue that threatens to erode the most important thing a creator has: the trust of their audience.