
US Army Developing System to Enable Soldier-Robot Dialogue

US Army researchers are developing an algorithm-based learning system to enable robots to communicate with soldiers during operations.

The effort aims to overcome the communication barriers robots face when they encounter unfamiliar objects and uncertain situations in a “tactical environment.”

Dr. Felix Gervits, a researcher at the US Army Combat Capabilities Development Command (DEVCOM), said robots should be able to ask questions to resolve that uncertainty.

Challenges

The challenge, however, is to figure out the kinds of questions a robot should ask, Gervits explained.

To overcome the challenge, the researchers have developed a three-part learning tool:

  • A software platform for online experimentation
  • A labeling scheme for categorizing questions
  • A labeled HuRDL (Human-Robot Dialogue Learning) corpus

The HuRDL corpus contains the dialogues collected in the study, transcribed and labeled “for various question types.”
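As a rough sketch of what a labeled entry in such a corpus might look like (the field names and example label below are illustrative, not the actual HuRDL schema):

```python
from dataclasses import dataclass

# Hypothetical structure for a labeled dialogue entry; the fields and the
# example question-type label are illustrative, not the HuRDL annotation schema.
@dataclass
class DialogueEntry:
    dialogue_id: str    # which interaction the utterance came from
    speaker: str        # "participant" or "experimenter"
    utterance: str      # transcribed text of the question or answer
    question_type: str  # category label assigned during annotation

# Example: a participant asking about an unfamiliar object.
entry = DialogueEntry(
    dialogue_id="session_01",
    speaker="participant",
    utterance="What does the object I'm looking for look like?",
    question_type="object-description",
)
print(entry)
```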

Understanding Human-Robot Interaction

In collaboration with the Naval Research Laboratory and Tufts University, DEVCOM researchers “collected dialogue data from a human-robot interaction study to investigate how robots should ask questions when confronted with novel concepts.”

Gervits says the study will contribute to the broader effort of developing “algorithms to support automated question generation and learning for Soldier-robot teaming.”

“The broad goal of this research is to improve Soldier-robot teaming in tactical environments by enabling robots to ask questions and learn in real-time through dialogue,” Gervits said. 

“The current research is a step toward this goal because it highlights the kinds of questions that people ask when encountering unfamiliar concepts,” he added.

“The data that we collected in the HuRDL corpus can be used to train robots to ask specific questions when they encounter similar kinds of uncertainty. They can then learn from the responses or ask follow-up questions if necessary.”

The researchers have also addressed the problem of robots asking too many questions, which can frustrate or overwhelm the humans answering them and disrupt the task. They analyzed “effective dialogue strategies and question types used in the study” to overcome barriers to an efficient conversation.
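The general idea can be sketched in a few lines of code: pick a clarifying question that matches the kind of uncertainty the robot has hit, but cap how many questions it asks per task. The uncertainty categories, question templates, and budget value below are assumptions for illustration, not the researchers' actual algorithm.

```python
# Minimal sketch of dialogue-driven clarification with a question budget.
# Categories, templates, and the budget are illustrative assumptions.

QUESTION_TEMPLATES = {
    "unknown_object": "What does the {item} look like?",
    "unknown_location": "Where should I look for the {item}?",
    "ambiguous_reference": "Do you mean the {item} near me or the one across the room?",
}

MAX_QUESTIONS_PER_TASK = 3  # cap to avoid overwhelming the human teammate


def clarify(uncertainty_type: str, item: str, questions_asked: int) -> str | None:
    """Return a clarifying question for the given uncertainty, or None if the
    question budget is exhausted or the uncertainty type is unrecognized."""
    if questions_asked >= MAX_QUESTIONS_PER_TASK:
        return None  # fall back to acting on the robot's best current guess
    template = QUESTION_TEMPLATES.get(uncertainty_type)
    return template.format(item=item) if template else None


# Example: the robot encounters an unfamiliar object name during a task.
question = clarify("unknown_object", "flange coupler", questions_asked=0)
if question:
    print(question)  # ask the soldier, then learn from the response or follow up
```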

Virtual Experiment

Meanwhile, researchers have readied a software platform for online experimentation that “allows for crowdsourced data collection in which people from all over the country can participate remotely in virtual ARL experiments.”

The platform allows participants to control a robot in a “virtual 3D environment that contains a variety of unfamiliar objects and locations.”

Participants were asked to locate and move particular objects with unusual names and properties; because the objects were unfamiliar, they asked the researchers several questions, which were recorded for analysis.

Annotation Scheme

Based on the analyses, researchers formulated an “annotation scheme, or a method to structure dialogue data.” The method classifies questions “based on their form and function.”

“This way of rigorously structuring dialogue data is an essential step in the process of developing automated approaches to question generation for robot learning through dialogue,” Gervits said.

This annotation scheme has been applied to the HuRDL corpus.
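As a loose illustration of what classifying a question by form and function could look like in practice (the category names and rules below are hypothetical, not the published scheme):

```python
# Loose illustration of labeling a question by form and function.
# Category names and matching rules are hypothetical, not the HuRDL scheme.

def annotate(question: str) -> dict:
    """Attach rough 'form' and 'function' labels to a question string."""
    lowered = question.lower()

    # Form: what the question looks like on the surface.
    if lowered.startswith(("is ", "are ", "do ", "does ", "can ")):
        form = "yes-no"
    elif lowered.startswith(("what", "where", "which", "how", "who")):
        form = "wh-question"
    else:
        form = "other"

    # Function: what information the speaker is trying to obtain.
    if "where" in lowered:
        function = "locate-object"
    elif "look like" in lowered or "color" in lowered:
        function = "describe-object"
    else:
        function = "other"

    return {"question": question, "form": form, "function": function}


print(annotate("Where is the fuel canister stored?"))
# {'question': 'Where is the fuel canister stored?', 'form': 'wh-question', 'function': 'locate-object'}
```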
