Content generated by AI is blurring our realities and leading to many unintended consequences.
In 2020, Liam Porr, a college student, used GPT-3 to build a fake blog that reached the #1 spot on Hacker News. Writing under the pseudonym “adolos,” he generated posts on simple productivity and self-help topics and published them with little to no editing. Members of the community, unaware that the blog was AI-generated, upvoted it. Liam later confessed that this was part of an experiment to see if GPT-3 could pass as a human writer. This was not the first time AI replaced a human writer; the LA Times has been using bots to write news articles since 2014.
By 2025 or 2030, 90% of the content on the internet will be auto-generated; automatic generation will hugely increase the volume of content available.
— Nina Schick, Deep Fakes and the Infocalypse
Tools like DeepFaceLab allow anyone today to spin up a video of a political candidate and spread it online; Descript lets you mimic anyone’s voice, and GPT lets you write fake news. Together, these become the ingredients of a fake news publishing machine. A single person with access to these tools can now disrupt the world’s shared reality: manipulate elections, spread propaganda, fabricate porn videos, conjure conspiracy theories, and more.
Volatile democracy
Politicians fear deepfakes because they can easily be used to swing elections
In March 2022, a deepfake video of Volodymyr Zelensky appeared on Facebook and YouTube. In it, he announced Ukraine’s surrender to Russia’s invasion and asked Ukrainian troops to lay down their weapons. The footage was not convincing; his upper body was motionless and his tone was off. But for untrained eyes, it was enough to create confusion and panic. This was the first weaponized use of deepfakes during an armed conflict.
“Democracies appear to be gravely threatened by the speed at which disinformation can be created and spread via social media, where the incentive to share the most sensationalist content outweighs the incentive to perform the tiresome work of verification.”
Source: Politicians fear this like fire, The Guardian
Fake news has already been used to influence past US elections, and with the emergence of new generative AI tools, producing it only gets easier.
DeepFake porn
Revenge porn has been taken to a whole other level
In the intro article, I shared the story of Rana, an Indian investigative journalist who was attacked through a deepfake porn campaign by her adversaries. Like her, many women have come forward in interviews to share how deepfake tools were used to target them with defamatory videos. Revenge porn has been taken to a whole other level: anyone today can use a picture of you to generate scandalous videos and post them online. The research company Sensity AI estimates that between 90% and 95% of all online deepfake videos are nonconsensual porn, and around 90% of those feature women. Most of these tools have been banned; however, governments and law-enforcement agencies are struggling to catch up and curb their use.
Karen Hao from the MIT Technology Review has been reporting powerful stories on this topic.
Bringing the dead back to life
Chatbots and deepfakes mimicking your dead loved ones
Companies like Microsoft and Amazon are racing to bring the dead back to life. Much like in the Black Mirror episode “Be Right Back,” companies are using data from our deceased loved ones to train chatbots. Here is a demo video from Amazon in which Alexa summons the voice of a kid’s late grandmother to tell him a story:
Somnium Space, a VR company that builds virtual worlds, also offers a “live forever” mode: even after you die, people can continue interacting with your avatar, trained on data collected while you were alive.
Another example is the documentary Roadrunner, which features the life of Anthony Bourdain, the renowned chef who passed away. Part of the voice-over in the documentary was done by an AI model trained on Anthony’s voice; his widow and other family members, however, were not happy about it. This opens up a bigger conversation about who controls the data of the dead. Digital-afterlife policies in most countries allow relatives to access a deceased person’s data if they make a case for it. But it is still unclear whether big data companies can keep using that data to train AI; there are no laws around who owns the data if no one claims it.
Replacing Celebrities
No more human actors
Who owns Bruce Willis? The actor recently had a digital model of himself mapped and shared it with a Russian ad agency to make the commercial above. Bruce Willis was diagnosed with aphasia, a disorder that impairs language and forced him to retire last year; through deepfakes, however, he can continue acting. Who’s ready for Die Hard 47?
Ironically, voice actors are helping train the AI that will soon replace them. It also means The Simpsons can keep running indefinitely thanks to AI. But this raises a bigger question: once celebrities sign away the rights to use their digital models in media, does it point to a future where we no longer need human actors? Actors’ unions worldwide are scrambling to get on top of this question.
Another example is the ‘Lost Tapes of the 27 Club’, a musical project that uses AI to generate new tracks in the style of musicians who struggled with mental health issues and died at the age of 27, including Kurt Cobain, Jimi Hendrix, Jim Morrison, and Amy Winehouse.