Deepfake Videos by John Dabell
Artificial Intelligence (AI) isn’t a technology of the future; it’s a technology of the present, and it’s changing the world, how we see it and how we experience reality. It’s also transforming how we interact with services: people can now get expert health advice through Amazon Alexa devices.
Artificially intelligent algorithms can now generate incredibly realistic faces, as well as manipulate footage of real people so they appear to say things that are totally fake.
A deepfake is AI-fabricated footage that appears real. The technology can animate a face from a single still image, meld faces together and show real people doing and saying things they never did, using text-to-speech machine learning that can literally put words in the mouth of whoever appears in a video. This is disinformation on steroids.
This unreal reality is disturbing: it can spread misinformation, alter perceptions, create chaos, destroy trust and threaten democracy. If deepfakes and deep video portraits fool adults, then they will fool children, and they can cause real, concrete harm.
One of the most outrageous examples, posted on Instagram, is a deepfake of Facebook’s Mark Zuckerberg apparently talking about how he has “total control of billions of people’s stolen data, all their secrets, their lives, their futures,” and how he owed it “all to Spectre.”
AI-produced videos are explosive because they could show politicians and public figures saying or doing something extremely outrageous and inflammatory, threatening international security and even changing world events.
Deepfakes are spreading, and in the wrong hands they can cause irreparable damage. This is deeply worrying because they can be easily created using open-source software (e.g. DeepFaceLab) and online tutorials.
Just as we have a responsibility to teach children about fauxtography, we also have to share with them the dangers of deepfake videos and how their views and opinions can be easily shaped and twisted.
They have to question everything and can no longer trust old sayings such as “the camera never lies,” because the world around them is being manipulated and doctored by deep-learning algorithms. Seeing is no longer believing, and things are not always what they look like.
Keeping children safe from lies and protecting them from propaganda is no easy task, and as AI becomes ever more sophisticated there are no short-cuts or quick fixes we can share with them for spotting fake videos. It’s a cat-and-mouse game even for the best technology brains. The first deepfakes were easy to spot, but they are now becoming almost impossible to detect, although journalists are being trained to identify them, especially by watching how simulated faces blink.
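For curious older pupils or colleagues, the blink cue can be made concrete with a small sketch. One common heuristic in blink detection is the eye aspect ratio (EAR): six landmark points around each eye are tracked frame by frame, and a sudden dip in the ratio marks a blink; early deepfakes often blinked unnaturally rarely. The landmark layout and threshold below are illustrative assumptions, not the method any particular detector or newsroom actually uses.

```python
import math

def eye_aspect_ratio(landmarks):
    """Compute the eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Assumed 6-point layout: p1/p4 are the horizontal eye corners,
    p2/p3 sit on the upper lid, p6/p5 on the lower lid. The EAR drops
    sharply when the eye closes, so a dip below a threshold in a video
    can be counted as a blink.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_per_frame, threshold=0.2):
    """Count blinks as the moments EAR first crosses below the threshold."""
    blinks = 0
    below = False  # are we currently inside a closed-eye run of frames?
    for ear in ear_per_frame:
        if ear < threshold and not below:
            blinks += 1
            below = True
        elif ear >= threshold:
            below = False
    return blinks
```

A real detector would get the landmark coordinates from a face-tracking library and compare the blink rate against the human norm of roughly 15–20 blinks per minute; an on-screen face that barely blinks is one (increasingly unreliable) red flag.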
Children can discuss the ethics of face-swap technology, whether malicious synthetic media should be made illegal, and how we can defuse disinformation. They can also debate whether deepfakes can be legitimately used for art, satire, comedy and entertainment.
You can share with children the work of WITNESS, a human rights organisation that shows how video and technology can be used to protect and defend rather than cause harm.
Deepfakes, cheapfakes and shallowfakes are now part of mainstream culture, and like it or not children will see them. We therefore have to be proactive in sharpening their critical thinking and digital literacy skills, helping them to prepare rather than panic, and to look at the world with critical eyes.