How can you be sure an image wasn’t Photoshopped? Make sure it was shot with Truepic. This startup makes camera technology that shoots photos and watermarks them with a URL leading to a saved copy of the original, so viewers can compare the two and confirm the version they’re seeing hasn’t been altered.
Now Truepic’s technology is getting its most important deployment yet as one way Reddit will verify that Ask Me Anything Q&As are being conducted live by the actual person advertised — oftentimes a celebrity. [Update: Though to be clear, there’s no Reddit-wide or corporate partnership here. Reddit’s independent r/IAmA subreddit moderators have opted to suggest people use Truepic.]
But beyond its utility for verifying AMAs, dating profiles and peer-to-peer e-commerce listings, Truepic is tackling its biggest challenge yet: identifying AI-generated deepfakes, videos in which artificial intelligence convincingly replaces one person’s face with someone else’s. Right now the technology is being used to create fake pornography that combines an adult film star’s body with an innocent celebrity’s face, without consent. But the bigger concern is that it could be used to impersonate politicians and make them appear to say or do things they haven’t.
The need for ways to weed out deepfakes has attracted a new $8 million round for Truepic. The cash comes from nontraditional startup investors, including Dowling Capital Partners, former Thomson Financial (which became Reuters) CEO Jeffrey Parker, Harvard Business School professor William Sahlman and more. The Series A brings Truepic to $10.5 million in funding.
“We started Truepic long before manipulated images impacted democratic elections across the globe, digital evidence of atrocities and human rights abuses were regularly undermined, or online identities were fabricated to advance political agendas — but now we fully recognize its impact on society,” says Truepic founder and COO Craig Stack. “The world needs the Truepic technology to help right the wrongs that have been created by the abuse of digital imagery.”
Here’s how Truepic works:
- Snap a photo in Truepic’s iOS or Android app, or in a third-party app that has paid to embed Truepic’s SDK
- Truepic verifies the image hasn’t already been altered, then watermarks it with a timestamp, geocode, URL and other metadata
- Truepic’s secure servers store a copy of the photo, assigned a six-digit code and a URL, plus a record on an immutable blockchain
- Users can post their Truepic in other apps to prove they aren’t catfishing someone on a dating site, selling something broken on an e-commerce site or misrepresenting themselves elsewhere
- Viewers can visit the URL watermarked onto the photo and compare it to the vault-saved version to confirm it hasn’t been modified after the fact (a rough sketch of that comparison follows below)
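Truepic hasn’t published how that last step works under the hood; in practice a viewer simply eyeballs the posted image against the vault copy. But the idea reduces to a straightforward integrity check, sketched below under some loud assumptions: the file name, vault URL and function names are hypothetical, and a byte-for-byte hash comparison is only a stand-in for however Truepic actually verifies its stored originals.

```python
import hashlib
import urllib.request


def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw image bytes."""
    return hashlib.sha256(data).hexdigest()


def matches_vault_copy(local_path: str, vault_url: str) -> bool:
    """Compare a locally displayed image against a vault-stored original.

    A byte-identical file yields an identical digest; any edit or re-save
    changes the hash and the check fails.
    """
    with open(local_path, "rb") as f:
        local_digest = sha256_of(f.read())

    with urllib.request.urlopen(vault_url) as resp:  # hypothetical vault URL
        vault_digest = sha256_of(resp.read())

    return local_digest == vault_digest


if __name__ == "__main__":
    # Hypothetical example: a six-digit code resolving to a vault copy.
    ok = matches_vault_copy("ama_proof.jpg", "https://example.com/vault/123456")
    print("unmodified" if ok else "altered or re-encoded")
```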
For example, the r/IAmA wiki recommends that AMA creators use the Truepic app to snap a photo of themselves holding a handwritten sign with their name and the date on it. “Truepic’s technology allows us to quickly and safely verify the identity and claims for some of our most eccentric guests,” says Reddit AMA moderator and Lynch LLP intellectual property attorney Brian Lynch. “Truepic is a perfect tool for the ever-evolving geography of privacy laws and social constructs across the internet.”
The abuses of image manipulation are evolving, too. Deepfakes could embarrass celebrities… or start a war. “We will be investing in offline image and video analysis and already have identified some subtle forensic techniques we can use to detect forgeries like deepfakes,” Truepic CEO Jeff McGregor tells me. “In particular, one can analyze hair, ears, reflectivity of eyes and other details that are nearly impossible to render true-to-life across the thousands of frames of a typical video. Identifying even a few frames that are fake is enough to declare a video fake.”
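McGregor doesn’t detail the detection pipeline, but his last point, that a handful of forged frames is enough to condemn a whole video, maps onto a simple aggregation rule. Here is a minimal sketch of that rule, assuming a hypothetical per-frame forensic scorer (`frame_score`); the thresholds are illustrative, not Truepic’s.

```python
from typing import Callable, Iterable, TypeVar

Frame = TypeVar("Frame")  # stand-in for a decoded video frame


def video_is_fake(
    frames: Iterable[Frame],
    frame_score: Callable[[Frame], float],
    score_threshold: float = 0.9,
    min_flagged_frames: int = 3,
) -> bool:
    """Flag a video as a forgery if enough individual frames look manipulated.

    `frame_score` stands in for a forensic model scoring artifacts around
    hair, ears, eye reflections and similar details that are hard to render
    consistently across thousands of frames.
    """
    flagged = 0
    for frame in frames:
        if frame_score(frame) >= score_threshold:
            flagged += 1
            if flagged >= min_flagged_frames:
                return True  # a few forged frames are enough to reject the video
    return False


if __name__ == "__main__":
    # Dummy example: ten precomputed scores, three of which look forged.
    scores = [0.1, 0.2, 0.95, 0.3, 0.97, 0.1, 0.99, 0.2, 0.1, 0.3]
    print(video_is_fake(scores, frame_score=lambda s: s))  # True
```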
This will always be a cat-and-mouse game, but from newsrooms to video platforms, Truepic’s technology could keep content creators honest. The startup has also begun partnering with NGOs like the Syrian American Medical Society to help deliver verified documentation of atrocities in Syria’s conflict zones. The Human Rights Foundation also trained humanitarian leaders on how to use Truepic at the 2018 Oslo Freedom Forum.
Throwing shade at Facebook, McGregor concludes: “The internet has quickly become a dumpster fire of disinformation. Fraudsters have taken full advantage of unsuspecting consumers and social platforms facilitate the swift spread of false narratives, leaving over 3.2 billion people on the internet to make self-determinations over what’s trustworthy vs. fake online… we intend to fix that by bringing a layer of trust back to the internet.”