DEEPFAKE BATTLES

This AI startup says it's the solution to deepfakes

Steg.AI co-founder Eric Wengrowski explains how his company works with governments and companies to authenticate online content

Steg.AI CEO and co-founder Eric Wengrowski.
Illustration: Vicky Leta, Photo: Courtesy of Steg.AI

This story is part of our new Chief Innovation Officer Forecast series with Gizmodo, a business report from the front lines of the future.

As generative AI technologies reach the hands of the public, it’s becoming more difficult to differentiate truth from fabrication.

AI deepfakes have made their way into the public sphere as they’re deployed by hackers, high schoolers, and even politicians. Teens have been arrested for generating deepfake nudes of their classmates. Florida Gov. Ron DeSantis’ now-defunct presidential campaign posted deepfakes of Donald Trump kissing former National Institute of Allergy and Infectious Diseases director Anthony Fauci on X last summer. A China-backed online group, “Spamouflage,” spread videos of AI-generated newscasters reporting fake news ahead of Taiwan’s presidential election. Meanwhile, a group of researchers at Microsoft believes China will likely use AI to meddle in the U.S. presidential election.

To regain public trust in an era of unclear realities, one solution that’s been employed by major companies such as Meta and Google — as well as the Biden administration — is digital watermarking. Digital watermarking embeds code into online content that’s invisible to the human eye but can be detected by algorithms, tagging the content as original or AI-generated. While some say the approach is imperfect or that it doesn’t go far enough to fight deepfakes, it’s surely a start.
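
In code, the core idea looks something like the toy sketch below: hide a short bit string in the least-significant bits of an image’s pixels, where the change is invisible to the eye but trivially recoverable by software. This is a deliberately simple illustration, not how commercial systems work; production watermarks (including Steg.AI’s deep-learning approach) must survive compression, cropping, and screenshots, which this naive scheme would not.

```python
# Toy least-significant-bit (LSB) watermark: write one payload bit into
# the lowest bit of each leading pixel, then read it back out.
import numpy as np

def embed(pixels: np.ndarray, bits: str) -> np.ndarray:
    """Hide a bit string in the lowest bit of the first len(bits) pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, so the original is untouched
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)  # clear the LSB, then set it to the payload bit
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> str:
    """Recover the hidden bit string from the lowest bits."""
    flat = pixels.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
payload = "1011001110001111"  # e.g., an "AI-generated" flag or a creator ID
marked = embed(image, payload)

assert extract(marked, len(payload)) == payload
# No pixel changed by more than 1 out of 255 -- imperceptible to the eye.
assert np.max(np.abs(marked.astype(int) - image.astype(int))) <= 1
```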

Eric Wengrowski is the co-founder and CEO of Steg.AI, a digital watermarking startup he founded in 2019 after completing his PhD in computer vision and watermarking technology. The company uses digital watermarks and deep learning to authenticate digital media. Wengrowski sat down with Quartz to talk about how he started Steg.AI and why he thinks digital watermarking is a critical technology.


Quartz: Steg.AI started before Big Tech began aggressively developing AI technologies. What motivated you to get this idea off the ground?

Eric Wengrowski: I was doing my PhD when the original “deepfake” work started to come out in academia. And it’s really cool, interesting work. I mean, it was amazing science, you know, amazing engineering. And then obviously there was a recognition at the time that, okay, this is something that could have ethical and security impacts, and we should start thinking about that. Steg is really built from the ground up to address these issues of authenticity and trust in media using responsible AI. The mission has been about providing infrastructure for knowing what is trustworthy, authentic content online.

Our customers are generative AI companies who want to identify their content as deepfakes. They are photographers and camera manufacturers who want to identify their content as organic. They are also companies and governments who want to be able to say, very strongly, ‘Hey, this is the official content.’


How has the industry changed over time?

When Steg was first founded in 2019, any time that we told investors or whoever else that we’re a watermarking company, we got cockeyed looks, like, ‘Oh watermarking, wasn’t that a thing that happened in the 90s? Why are you doing that now?’ And since the proliferation of deepfakes, and since the White House has taken a strong stand on the need to bring watermarking in as a strategically important technology to address misinformation and deepfakes, things have changed and more [digital watermarking] companies have cropped up.


Can you talk about the dangers of deepfakes?

The authenticity of media has always been in question, especially when there’s political opportunity. I think the difference is that, back in the day, to create something that was synthetic and convincing, you needed to be really skillful at literal cut and paste. Or you needed the resources of a movie production studio. With the advent of tools like Photoshop, the proliferation of misinformation grew. I mean, I remember looking at photos of the North Korean military that had been photoshopped to make it look like their armies were bigger or that a certain leader was alive and well.

I think the difference here is that the cost to create very realistic but potentially misleading content has been dramatically lowered by gen AI and deepfakes, and the problem of distributing that content has been supercharged with social media. So it’s really a problem of scale.

Deepfakes have gotten so good — and they’re going to keep getting better — that seeing is no longer believing. You can’t just look at a picture or a video and reliably tell if it’s real or not. We need a way to handle this problem of trust at scale.

Why is watermarking the answer, in your opinion?

So you need a solution [to deepfakes] that works at scale, and watermarking with content credentials is that solution. An AI-generated image with a watermark carries a content credential that identifies [its source], says whether it came out of an algorithm, and may include other relevant information about who created it. That adds to the value of the content; you know that you can trust the origin.


Steg has been working with really great partner companies along the way. We were one of the early members of C2PA, which is an open standard [for identifying both original and AI-generated content]. [We realized] just coming up with the world’s best deepfake detector isn’t a real solution here. You want to provide a content credential that is going to look at that photo and inform people downstream what the origin is.