DARPA invests in AI that can translate instruction manuals into augmented reality

A U.S. Air Force Airman assigned to Dyess Air Force Base, Texas, tries on an augmented reality headset April 18, 2019. (U.S. Air Force photo by Airman 1st Class Mercedes Porter)

WASHINGTON – The Defense Advanced Research Projects Agency has issued a $5.8 million contract to a team building an artificial intelligence system able to scan instruction manuals and convert that data into instructions for augmented reality systems.

Companies are already using augmented reality technologies in their manufacturing processes. Lockheed Martin, for example, uses augmented reality goggles in assembling its space systems for NASA. With the goggles on, technicians can see relevant information and instructions in the space around them as they go about their work, saving them from having to constantly walk back and forth to consult physical manuals or computer monitors.

Under the $5.8 million contract, PARC, a Xerox company, will work with the University of California at Santa Barbara, the University of Rostock in Germany and Patched Reality on the Autonomous Multimodal Ingestion for Goal-Oriented Support (AMIGOS) project for the Perceptually-enabled Task Guidance Program. In short, the goal is to take the existing paper and video manuals used today and automatically convert them for use in augmented reality systems.

“Augmented reality, computer vision, language processing, dialogue processing and reasoning are all AI technologies that have disrupted a variety of industries individually but never in such a coordinated and synergistic fashion,” said Charles Ortiz, the principal investigator for AMIGOS, in a statement. “By leveraging existing instructional materials to create new AR guidance, the AMIGOS project stands to accelerate this movement, making real-time task guidance and feedback available on-demand.”

The teams will deliver two different but related systems to DARPA. The first is an artificial intelligence system that can extract task information from texts, illustrations and videos. The second system will take that information and create augmented reality guidance based on it. Moreover, the second AI will be able to deliver tasks and information in a personalized way based on the user's skills and emotional state.
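DARPA and PARC have not published implementation details for AMIGOS, but the two-stage pipeline described above can be sketched in miniature. The snippet below is purely illustrative: the function names, the numbered-step manual format, and the skill-based personalization rule are all assumptions, not anything from the actual program.

```python
import re

def extract_steps(manual_text):
    # Stage 1 (hypothetical): pull numbered task steps out of a
    # plain-text instruction manual using a simple pattern match.
    steps = []
    for line in manual_text.splitlines():
        m = re.match(r"\s*\d+[.)]\s+(.*)", line)
        if m:
            steps.append(m.group(1).strip())
    return steps

def to_ar_guidance(steps, skill_level="novice"):
    # Stage 2 (hypothetical): convert extracted steps into AR overlay
    # records, trimming detail for experienced users as a stand-in for
    # the personalization AMIGOS aims to provide.
    guidance = []
    for i, step in enumerate(steps, start=1):
        guidance.append({
            "step": i,
            "overlay_text": step if skill_level == "novice" else step.split(",")[0],
            "show_diagram": skill_level == "novice",
        })
    return guidance

manual = """1. Remove the access panel.
2. Disconnect the power cable, then wait 30 seconds.
3. Replace the filter."""

ar_script = to_ar_guidance(extract_steps(manual), skill_level="expert")
```

A production system would replace the regex stage with language and vision models that also ingest illustrations and video, but the data flow, from source material to structured steps to personalized AR overlays, is the same shape.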

PARC is the lead contractor on the project.

Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.

