The Defense Advanced Research Projects Agency (DARPA) has launched a program to improve the security of US military mixed reality (MR) solutions against cognitive exploits.
Cognitive exploits are digital attacks that target the link between users and their virtual equipment.
Known cognitive exploits include planting real-world objects to clutter projected virtual displays and overwhelm personnel with confusing false alarms, flooding users with information to induce motion sickness, and injecting virtual data to distract them.
Effects include cybersickness, emotional manipulation, anxiety, and reduced trust in MR platforms.
Mathematical Solutions to Secure Personnel
The Intrinsic Cognitive Security (ICS) program aims to mitigate potential threats to personnel in a virtual domain.
It will research and test mathematical approaches, known as “formal methods,” producing solutions to secure MR system designs against cognitive attacks.
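Formal methods work by expressing system requirements as precise, machine-checkable properties. As a loose illustration only (the names, limits, and display model below are invented for this sketch and are not part of ICS or any DARPA specification), the example states an anti-flooding invariant for a toy heads-up display and checks individual states against it; an actual formal-methods tool would go further and prove that such a property holds for every reachable state of the system.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical limits; real MR security properties would be derived
# from human-factors research, not hard-coded constants.
MAX_SIMULTANEOUS_ALERTS = 3
MAX_OVERLAY_COVERAGE = 0.30  # fraction of the user's field of view

@dataclass
class HudState:
    """A simplified model of a mixed reality heads-up display."""
    alerts: List[str] = field(default_factory=list)
    overlay_coverage: float = 0.0  # 0.0-1.0 of the view occluded

def satisfies_invariants(state: HudState) -> bool:
    """Check anti-flooding properties of a single HUD state.

    Returns False for states that could overwhelm or occlude the
    user, the condition an information-flooding exploit tries to
    force.
    """
    return (len(state.alerts) <= MAX_SIMULTANEOUS_ALERTS
            and state.overlay_coverage <= MAX_OVERLAY_COVERAGE)

# A benign state passes; a flooded state is rejected.
ok = HudState(alerts=["low battery"], overlay_coverage=0.1)
flooded = HudState(alerts=["a", "b", "c", "d"], overlay_coverage=0.8)
assert satisfies_invariants(ok)
assert not satisfies_invariants(flooded)
```

The point of the sketch is the shape of the guarantee, not the numbers: once a property like this is written down formally, verification tools can check an entire system design against it before deployment.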
Through this approach, ICS will ensure that MR capabilities operate smoothly before the technology is widely leveraged for military missions, the agency noted.
“We need to develop methods to protect mixed reality systems before systems lacking protections are pervasive,” DARPA ICS Program Manager Dr. Matthew Wilding explained.
“This program will show how to protect personnel using rigorous, math-based development practices that enable MR adoption plans in [Department of Defense] organizations.”
Wilding said that the user behaviors studied under the program will contribute to formalizing data on how people behave in immersive environments.
“ICS does not have a sole MR system in mind. Instead, proposers will work with various commercial technologies performing different MR-related tasks,” he added.
DARPA’s Intrinsic Cognitive Security Program
ICS will run for three years in two phases.
The initial phase will focus on developing formally verified solutions and supporting models to better understand the desirable properties of MR systems.
The second phase will build on those results to validate the solutions' utility in MR systems. Prototypes based on commercially available hardware and software will be developed to test how well the solutions reduce vulnerability to cognitive exploits.