As the Army pushes to integrate autonomous vehicles into its forces, the service wants to ensure soldiers will trust, and know how to work with, their new artificial intelligence “teammates.”
Even the most advanced battlefield technologies will mean very little if operators do not trust them or know how to use them. Recent research into the military’s AI investments found a critical lack of attention to human-machine trust, a gap the Army appears to be trying to close with exercises designed around soldier-robot interaction.
For its next-generation ground vehicles and future robotic vehicle development, the Army is following a mantra of “soldiers must touch the equipment,” Maj. Gen. Ross Coffman said during a virtual event hosted by the Center for Strategic and International Studies. Coffman leads the cross-functional team working to field the next-generation ground combat vehicle, a major effort to replace decades-old combat vehicles with technology-enabled systems.
“Without those soldier touchpoints, we fully understand we would not be serving our customer,” he said. Coffman said a “platoon of robots” was sent to Fort Carson in Colorado, where it was integrated into daily soldier training for six weeks. For soldiers who can work directly with the robots, Coffman added, exercises are held at least “once a quarter.”
Other initiatives go beyond in-person training, using virtual exercises to familiarize soldiers with robots that can’t be sent to them.
“That doesn’t mean it’s over a camera. They are actually learning how to fight and use them in a computer-simulated game,” he said.
It’s unclear which robotic vehicles the Army is using in these exercises, but many autonomous vehicles are in the works. The new vehicles the Army is designing range from small voice-activated robots for bomb disposal and reconnaissance to large “optionally manned” troop carriers built to follow other vehicles in convoy. Many will still rely on human direction, whether broad voice commands like “go look inside that building” or cues from a human-driven lead vehicle.
But despite all the money going into the technology itself, a research paper from the Center for Security and Emerging Technology (CSET) found little evidence of work on making the machines interact well with humans.
“If the person doesn’t trust the system that is providing recommendations, then we are losing a lot of money that went into developing these technologies,” Margarita Konaev, lead author of the paper, told FedScoop in October. She added that DOD-backed research on human-machine trust was “something that we were expecting to see, but it really was not something that we found.”
Read the original article at FedScoop.