Deepfake of Biden speaking five languages shows the technology's potential and risks

WASHINGTON — At a workshop hosted Aug. 26 by the Air Force's Air University in Montgomery, Alabama, students were shown a video of President Joe Biden addressing the United Nations while effortlessly switching among five languages, including Mandarin and Russian.
While Thomas Jefferson and John Quincy Adams were fluent in several languages, Biden, like most US presidents, speaks only English.
The video was a piece of synthetic media better known as a “deepfake.” Created using a combination of machine learning and artificial intelligence, deepfakes are hyper-realistic videos that substitute one person's likeness for another's, or appear to show someone doing something they never did.
And as technology improves, they become harder to detect.
The workshop, which was attended by DeepMedia, a synthetic media startup, focused on giving airmen an introduction to deepfakes, their relevance to military applications and how the company's AI technologies could be used to combat the threat they can pose.
Earlier this year, a deepfake video showed Ukrainian President Volodymyr Zelenskyy urging his soldiers to lay down their arms and surrender amid the country’s conflict with Russia. Although the video was quickly debunked and removed, the technology is increasingly being used in propaganda and disinformation campaigns.
Rijul Gupta, co-founder of Oakland, California-based DeepMedia, called deepfakes the “next frontier” in video technologies, citing their low production costs and relatively quick turnaround time.
“Everything will be able to be faked in real time,” he said at the workshop.
The company created the Biden video to illustrate that not all deepfakes are malicious. DeepMedia’s AI-based universal translator tool, for example, can be used to facilitate diplomatic communication, it said.
The translator tool takes a video of someone speaking one language and converts it so the person appears to speak, and sounds as though they are speaking, another language in real time. The technology could also be used to determine the linguistic benchmarks that distinguish a deepfake from a real video.
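For readers curious about the mechanics, a speech-to-speech translation pipeline of this general kind can be sketched from off-the-shelf open-source parts: extract the audio track, transcribe and translate the speech, synthesize new audio, then re-render the speaker's mouth to match. The Python sketch below is purely illustrative and is not DeepMedia's system; it assumes ffmpeg, OpenAI's Whisper and the pyttsx3 text-to-speech library are installed, the file names are hypothetical, and the generative lip-sync stage is left as a placeholder.

```python
# Illustrative sketch only, not DeepMedia's product: chains common open-source
# tools to show the stages of a speech-to-speech video translation pipeline.

import subprocess
import whisper   # OpenAI's open-source speech recognition/translation model
import pyttsx3   # simple offline text-to-speech engine


def extract_audio(video_path: str, wav_path: str = "speech.wav") -> str:
    """Pull a mono 16 kHz audio track out of the source video with ffmpeg."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vn", "-ac", "1", "-ar", "16000", wav_path],
        check=True,
    )
    return wav_path


def transcribe_and_translate(wav_path: str) -> str:
    """Transcribe the speech and translate it to English (Whisper's 'translate' task)."""
    model = whisper.load_model("base")
    result = model.transcribe(wav_path, task="translate")
    return result["text"]


def synthesize_speech(text: str, out_path: str = "translated.wav") -> str:
    """Render the translated text as audio. A production system would instead use a
    voice-cloning model so the output still sounds like the original speaker."""
    engine = pyttsx3.init()
    engine.save_to_file(text, out_path)
    engine.runAndWait()
    return out_path


def resync_lips(video_path: str, new_audio_path: str) -> str:
    """Placeholder for the generative step that makes the speaker's mouth movements
    match the new audio (e.g., a Wav2Lip-style model). Omitted in this sketch."""
    raise NotImplementedError("lip re-synchronization model goes here")


if __name__ == "__main__":
    wav = extract_audio("speech_video.mp4")          # hypothetical input file
    english_text = transcribe_and_translate(wav)
    dubbed_audio = synthesize_speech(english_text)
    # final_video = resync_lips("speech_video.mp4", dubbed_audio)
```

A production system would also need voice cloning to preserve the speaker's voice and low enough end-to-end latency to run in real time, as Gupta describes.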
Hollywood has been using some form of deepfake technology for years to make older actors appear decades younger or even give new “life” to dead actors. Actor Peter Cushing appeared in Rogue One: A Star Wars Story in 2016, more than two decades after his death.
Nonetheless, deepfakes of celebrities and other politicians have become widespread on social media video platforms, raising concerns about the potential use of synthetic media to spread disinformation.
According to a 2021 US Department of Homeland Security report, the threat posed by deepfakes does not come from the technology used to create them, but from people’s natural tendency to believe what they see. Consequently, deepfakes don’t have to be particularly sophisticated or credible to be effective in spreading misinformation, the report states.
DeepMedia focuses on both detection and production of synthetic media. In April, the Air Force Research Laboratory announced a partnership with the company to study deepfake detection.
One of the most important ways to determine whether a video is a deepfake is to analyze the language used, which can reveal dialect discrepancies between the person shown in the video and the audio presented.
Analyzing videos for these differences takes time, money and effort, said Emma Brown, a co-founder of the company.
In many cases, the cryptologic linguist performing the analysis must have a deep understanding of the language and its dialects, as well as the culture and history of the country. A cursory knowledge of the language is not enough to spot the flaws in a deepfake, she said.
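Brown did not detail the company's detectors, but the idea of using language itself as a forensic signal can be illustrated with a toy check: does the language detected in a clip's audio track match the languages the purported speaker is known to use? The snippet below assumes OpenAI's open-source Whisper model and a hypothetical file name; real forensic analysis of dialect and phrasing goes far beyond this.

```python
# Toy illustration only: flag a clip when the language detected in its audio
# track does not match the languages the purported speaker is known to use.
# Assumes OpenAI's open-source Whisper model; "suspect_clip.wav" is hypothetical.

import whisper

KNOWN_LANGUAGES = {"en"}  # e.g., a sitting U.S. president who speaks only English

model = whisper.load_model("base")           # multilingual model needed for language ID

audio = whisper.load_audio("suspect_clip.wav")
audio = whisper.pad_or_trim(audio)           # Whisper's language ID uses a 30-second window
mel = whisper.log_mel_spectrogram(audio).to(model.device)

_, probs = model.detect_language(mel)        # probability for each language code
detected = max(probs, key=probs.get)

if detected not in KNOWN_LANGUAGES:
    print(f"Possible deepfake: audio is '{detected}', "
          f"outside the speaker's known languages {sorted(KNOWN_LANGUAGES)}.")
else:
    print(f"Audio language '{detected}' is consistent with the speaker's profile.")
```

By that logic, the fake Biden address shown at the workshop, with its fluent Mandarin and Russian, would trip exactly this kind of check.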
While the military grapples with a major recruiting crisis, attracting the talent needed to fill cryptologic linguist positions poses yet another challenge. To help fill that gap, Brown and Gupta said, DeepMedia has developed AI tools that can stand in for human linguists.
“There just aren’t enough people to do this without AI,” Gupta said.
Catherine Buchaniec is a reporter at C4ISRNET, where she covers artificial intelligence, cyber warfare and unmanned technologies.