
US Army Researchers Develop Tool to Detect Deepfakes 

US Army researchers have developed a tool to help detect deepfake content, which adversaries can use to compromise national security.

A deepfake is any hyper-realistic image, video, or audio clip synthesized by artificial intelligence (AI) that falsely depicts individuals saying or doing something, the Army Research Laboratory’s (ARL) Dr. Suya You and Dr. Shuowen (Sean) Hu explained.

Criminals can use deepfake technology to cyberbully, defame, or blackmail individuals by superimposing their faces onto fabricated content. Culprits can also use the technology to scam people by mimicking their voices.

Deepfakes to Generate Social Unrest

Clint Watts, a researcher at the Philadelphia-based Foreign Policy Research Institute, wrote in the Washington Post that countries such as China and Russia, whose AI capabilities rival those of the US, can use deepfake methods to “incite fear inside of Western democracies and distort the reality of American audiences.”

Watts further explained that in an increasingly cyber-engaged world, it has become more difficult to refute a “falsified video after it has been watched.”

He specifically flagged the technology’s potential to target US officials and democratic processes, and to foment social unrest by drawing groups of people together under false pretenses.

DefakeHop to Counter Deepfake

US Army researchers working on counter-deepfake technology said that the new method, called DefakeHop, will be less complex and easier to use than existing detection methods.

“Due to the progression of generative neural networks, AI-driven deepfake advances so rapidly that there is a scarcity of reliable techniques to detect and defend against deepfakes,” You said.

“There is an urgent need for an alternative paradigm that can understand the mechanism behind the startling performance of deepfakes and develop effective defense solutions with solid theoretical support.”

More Robust, Scalable, and Portable

Researchers claim that DefakeHop has an edge over traditional detection methods, which rely on complex machine learning tools that lack “robustness, scalability, and portability.”

“This research provides a robust spatial-spectral representation to purify the adversarial inputs, thus adversarial perturbations can be effectively and efficiently defended against,” the US Army said in a statement.

The framework of the new method, researchers said, combines principles from machine learning, signal analysis, and computer vision.
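
To make that combination concrete, the sketch below shows one plausible shape such a pipeline could take: a PCA-based “spatial-spectral” feature extractor over image patches feeding a small classifier. This is purely illustrative and is not ARL’s actual DefakeHop code; the patch size, number of components, and choice of classifier are all assumptions made for the example.

```python
# Illustrative sketch of a lightweight spatial-spectral deepfake detector.
# NOT the ARL/DefakeHop implementation; PATCH, n_components, and the
# classifier are assumptions chosen for demonstration only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier

PATCH = 8  # spatial block size (assumption)

def extract_patches(images):
    """Cut each grayscale image into non-overlapping PATCH x PATCH blocks."""
    n, h, w = images.shape
    images = images[:, : h - h % PATCH, : w - w % PATCH]
    blocks = images.reshape(n, h // PATCH, PATCH, w // PATCH, PATCH)
    # Reorder axes so each block becomes one flattened feature vector.
    return blocks.transpose(0, 1, 3, 2, 4).reshape(n, -1, PATCH * PATCH)

def fit_detector(real_imgs, fake_imgs, n_components=12):
    """Learn a PCA 'spectral' basis over patches, then a small classifier."""
    X = np.concatenate([real_imgs, fake_imgs])
    y = np.concatenate([np.zeros(len(real_imgs)), np.ones(len(fake_imgs))])
    patches = extract_patches(X)
    pca = PCA(n_components=n_components)
    pca.fit(patches.reshape(-1, PATCH * PATCH))  # spectral basis per patch
    feats = pca.transform(patches.reshape(-1, PATCH * PATCH))
    feats = feats.reshape(len(X), -1)  # concatenate per-patch coefficients
    clf = GradientBoostingClassifier().fit(feats, y)
    return pca, clf

def score(pca, clf, images):
    """Return the probability that each input image is a deepfake."""
    patches = extract_patches(images)
    feats = pca.transform(patches.reshape(-1, PATCH * PATCH))
    feats = feats.reshape(len(images), -1)
    return clf.predict_proba(feats)[:, 1]
```

A learned patch basis and a shallow classifier are tiny compared with a deep neural network, which is consistent with the small model size, limited training data, and low-resolution inputs the researchers describe, though the actual DefakeHop design may differ substantially from this sketch.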

They hope that soldiers will one day be able to “carry intelligent yet extremely low size–weight–power vision-based devices on the battlefield.”

“The developed solution has quite a few desired characteristics, including a small model size, requiring limited training data, with low training complexity and capable of processing low-resolution input images. This can lead to game-changing solutions with far-reaching applications to the future Army,” You added.

