Deep Nostalgia, released by the company MyHeritage, is trending on social media, with users reanimating everyone from famous composers to dead relatives. The software is drawing mixed reactions: some people are delighted by the creations, while others find them creepy. The technology shows just how easy it is to create videos of people doing things they never actually did.

“Deepfake technology is getting more sophisticated and more dangerous,” Aaron Lawson, assistant director of SRI International’s Speech Technology and Research (STAR) Laboratory, said in an email interview. “This is partly due to the nature of artificial intelligence. Where ‘traditional’ technology requires human time and energy to improve, AI can learn from itself.

“But AI’s ability to develop itself is a double-edged sword,” Lawson continued. “If an AI is created to do something benevolent, great. But when an AI is designed for something malicious, like deep fakes, the danger is unprecedented.”
Software Brings Photos to Life
Genealogy website MyHeritage introduced the animation engine last month. The technology, known as Deep Nostalgia, lets users animate photos via the MyHeritage website. A company called D-ID designed algorithms for MyHeritage that digitally recreate the movement of human faces. The software applies those movements to photographs and modifies facial expressions so they move the way human faces usually do, according to the MyHeritage website.

Deep Nostalgia shows that deepfake technology is becoming more accessible, Lior Shamir, a professor of computer science at Kansas State University, said in an email interview. The technology is progressing quickly and eliminating even the subtle differences between fake and real video and audio.

“There has also been substantial progress towards real-time deep fake, meaning that convincing deep fake videos are generated at the time of video communication,” Shamir said. “For instance, one can have a Zoom meeting with a certain person, while seeing and hearing the voice of a completely different person.”

There is also a growing number of language-based deep fakes, Jason Corso, the director of the Stevens Institute for Artificial Intelligence at the Stevens Institute of Technology, said in an email interview. “Generating whole paragraphs of deep fake text toward a specific agenda is quite difficult, but modern advances in deep natural language processing are making it possible,” he added.
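Corso’s point about machine-generated text is easy to see firsthand with openly available tools. The short sketch below is only an illustration, assuming the open-source Hugging Face transformers library and the freely downloadable GPT-2 model, neither of which is mentioned in this article; it simply shows that a single prompt is enough to produce paragraphs of fluent, fabricated prose.

```python
# A minimal sketch of how accessible machine-generated text has become.
# Assumptions not drawn from the article: the Hugging Face "transformers"
# library and the public GPT-2 model stand in for the "deep natural
# language processing" advances Corso describes.
from transformers import pipeline, set_seed

# Load an off-the-shelf text-generation model (weights download on first run).
generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled output repeatable

# A single prompt is enough to produce whole paragraphs of fluent text.
prompt = "Officials confirmed today that"
outputs = generator(
    prompt, max_length=80, num_return_sequences=2, do_sample=True
)

for i, out in enumerate(outputs, start=1):
    print(f"--- Sample {i} ---")
    print(out["generated_text"])
```

Running it a few times with different prompts makes clear why researchers worry about text generated “toward a specific agenda.”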
How to Detect a Deep Fake
While deepfake detection technology is still in a nascent stage, there are a few ways you can spot one, Corso said, starting with the mouth.

“The variability in the appearance of the inside of the mouth when someone is speaking is very high, making it difficult to animate convincingly,” Corso explained. “It can be done, but it is harder than the rest of the head. Notice how the Deep Nostalgia videos do not demonstrate an ability for the photograph to say ‘I love you’ or some other phrase during the deep fake creation. Doing so would require the opening and closing of the mouth, which is very difficult for deep fake generation.”

Ghosting is another giveaway, Corso added. If you see blurring around the edges of the head, that’s the result of “fast motion or limited pixels available in the source image. An ear could partially disappear momentarily, or hair could become blurry where you wouldn’t expect it to,” he said.

You can also look for color variation when trying to spot a deepfake video, such as a sharp line across the face with darker colors on one side and lighter on the other.

“Computer algorithms can often detect these patterns of distortion,” Shamir said. “But deep fake algorithms are advancing rapidly. It is inevitable that strict laws will be required to protect from deep fakes and the damage they can easily cause.”
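The cues Corso and Shamir describe can be turned into simple automated checks. The sketch below is a rough illustration, assuming the OpenCV and NumPy libraries, a placeholder image path ("suspect_frame.jpg"), and hand-picked thresholds; it measures sharpness around a detected face (blurring and ghosting) and the color gap between the two halves of the face (the “sharp line” Shamir mentions). It is nothing close to a production deepfake detector.

```python
# Rough sketch of the two visual cues described above: ghosting/blur and
# left/right color asymmetry. Assumptions: OpenCV (opencv-python) and NumPy
# are installed, and "suspect_frame.jpg" is a placeholder image path.
import cv2
import numpy as np

frame = cv2.imread("suspect_frame.jpg")
if frame is None:
    raise SystemExit("Could not read the image; replace the placeholder path.")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Locate a face with OpenCV's bundled Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    face = frame[y:y + h, x:x + w]
    face_gray = gray[y:y + h, x:x + w]

    # Cue 1: blur/ghosting. The variance of the Laplacian is a standard
    # sharpness measure; unusually low values can indicate the smearing
    # around the head that Corso describes. The threshold is arbitrary.
    sharpness = cv2.Laplacian(face_gray, cv2.CV_64F).var()

    # Cue 2: color asymmetry. Compare the mean color of the left and right
    # halves of the face; a large gap hints at a sharp tonal line.
    left, right = face[:, : w // 2], face[:, w // 2:]
    color_gap = np.abs(left.mean(axis=(0, 1)) - right.mean(axis=(0, 1))).max()

    print(f"Face at ({x},{y}): sharpness={sharpness:.1f}, color gap={color_gap:.1f}")
    if sharpness < 50 or color_gap > 40:
        print("  -> worth a closer look; one of the cues above is present")
```

Real detection systems train models on thousands of examples of these artifacts, but even this crude version shows why the edges of the head and the symmetry of the face are the first places investigators look.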