In the thick of the Iran-Iraq War, the USS Vincennes killed 290 civilians in the accidental downing of an Iranian airliner mistaken for an enemy warplane.
Described as an “autonomous engagement,” the incident began in 1988 when the ship went to full alert because its Aegis computer combat system warned of impending
danger from an attacking military jet as Iran Air Flight 655 flew into the USS Vincennes’ 50-km area, explained Howie Marsh, senior defence analyst at the Conference of Defence Associations in Ottawa.
Although reports also blamed the disaster on an overzealous ship captain and an inexperienced crew, the incident illustrates just one of the challenges facing militaries around the world, including Canada’s, as they develop autonomous intelligent systems.
These are essentially software systems that can operate with “a greater independence of human input,” explained Bruce Digney, a defence scientist at Defence R&D Canada-Suffield in Alberta. His team is developing these technologies for the country’s military ground vehicles, although they can also be fitted into unmanned aerial vehicles (UAVs) and marine vehicles.
Even Digney acknowledged the obstacles to putting this technology, which may be equipped with missiles, onto the world’s battlefields: “If you have a machine that’s making decisions in the world for itself, how do you gain some trust, especially if you’re . . . expected to put your life in the decisions of that machine?”
A scenario related two years ago by the U.S. Department of State underscores the risks: UAVs could be used to deliver chemical and biological weapons if countries or groups labelled enemies obtain them, according to testimony given by Vann Van Diepen, Acting Deputy Assistant Secretary for Non-Proliferation, before the Senate Governmental Affairs Subcommittee on International Security, Proliferation and Federal Services.
Meanwhile, several crashes of missile-armed UAVs have marred the international campaign against terrorism in Afghanistan, with bad weather often cited as the cause.
“When you get into autonomous intelligent systems that apply lethal force, again, I don’t see us going in there because there isn’t a single commander that I know in the Canadian forces who is willing to allow an autonomous system to make a decision that could get him into court, and he’d be held liable,” Marsh said.
Although non-lethal autonomous intelligent systems involving sensors are likely to proceed, he said, the deadlier variety equipped with weapons may not make it past the military’s legal department.
Ottawa-based Frontline Robotics Inc., which is developing these technologies, believes one of the greatest impediments to using these systems is asking the robots, or vehicles, to operate in an unconstrained world model in which “you just literally drop it in the middle of nowhere and have it try to figure out what it can do next,” a method that requires a great deal of computational power, said president and CEO Richard Lepack.
Rather, he said, robots can better perform in a defined environment that limits the number of variables.
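Lepack’s point can be illustrated with a toy example: on a small, fully mapped grid, a complete search such as breadth-first search finds a path cheaply because the environment bounds the number of states to consider, something an unconstrained world does not. The sketch below is purely illustrative and assumes nothing about Frontline Robotics’ actual software.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a known occupancy grid (0 = free, 1 = blocked).
    A bounded, pre-mapped environment keeps the search space small."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable in this map

# A 3x3 patrol area with a wall forcing a detour around the right side.
grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
path = shortest_path(grid, (0, 0), (2, 0))
```

Because the map is known in advance, the robot never has to reason about what lies beyond the grid, which is exactly the kind of variable-limiting Lepack describes.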
There are, of course, advantages to a greater adoption of defence-related robotics, Lepack explained. Most notably, although an enemy or an intruder can destroy a robot, “it’s a lot better than taking out a soldier.”
They can also help troops in vehicles to perceive the terrain ahead and decide which way to steer, said Digney, adding that the allure of these independent systems is that they address the problem of exhausted operators.
“If you can have one person, one commander, controlling four, five, 10 or 20 vehicles, you have an effective multiplication of force.”
He said the U.S., clearly the leader in machine intelligence and autonomous vehicles and systems, has been investing “tremendous amounts of money and resources” into these areas.
Although DRDC-Suffield lacks similar capacity, it has been working on vehicle autonomy for about four years and has hired about 15 staff including scientists and technicians to support this program since 2002.
Yet Canadian industry players like Frontline Robotics have had no luck selling autonomous intelligence software to the Canadian military. “We’re having a hard time getting a response from them,” Lepack explained.
Lepack said the company has also deliberately steered clear of the U.S., unlike competitors Boeing Co. and General Dynamics, because the U.S., which wants to replace one-third of its fighting force with robots by 2015, asks companies working with it on robotics technology not to export these systems elsewhere.
Founded in 2001, Frontline Robotics has secured its first contract with South Korea for robot open control technology, software that allows robots to act independently or collaboratively. Lepack explained the South Korean government aims to use the system to patrol the border with North Korea for “any anomalies, any intrusion” by people or vehicles.
Digney said that because the military needs high performance from its systems to be “tactically relevant,” a new generation of vehicles will likely be tested first in less demanding civilian applications such as mining and agriculture.
Autonomous vehicles will then be phased in to the Canadian armed forces, with only “small parts of tasks” being automated in the beginning, he explained. For instance, the Canadian Forces might have a logistics system in which a human operator initially drives a route but the vehicle independently returns along this path.
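The drive-out, return-alone pattern Digney describes can be sketched simply: the vehicle logs waypoints while the human drives the outbound leg, then replays them in reverse to retrace the route. The `Waypoint` structure and function names below are assumptions made for illustration, not any actual Canadian Forces or DRDC system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Waypoint:
    x: float  # easting, metres
    y: float  # northing, metres

def record_route(gps_fixes):
    """Log a waypoint for each GPS fix while a human operator drives out."""
    return [Waypoint(x, y) for x, y in gps_fixes]

def plan_return(route, min_spacing=1.0):
    """Reverse the logged route, dropping near-duplicate fixes so the
    steering controller is not fed redundant targets."""
    reversed_route = list(reversed(route))
    plan = [reversed_route[0]]
    for wp in reversed_route[1:]:
        last = plan[-1]
        dist = ((wp.x - last.x) ** 2 + (wp.y - last.y) ** 2) ** 0.5
        if dist >= min_spacing:
            plan.append(wp)
    plan[-1] = reversed_route[-1]  # always finish exactly at the start point
    return plan

# Outbound leg driven by the operator; the return leg is planned automatically.
outbound = record_route([(0.0, 0.0), (0.2, 0.1), (5.0, 0.0), (10.0, 4.0)])
return_leg = plan_return(outbound)
```

Only the return leg is automated here, which matches the “small parts of tasks” approach: the hard perception and judgment problems stay with the human on the way out.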
He added that Canada’s new crop of unmanned ground vehicles is expected to debut no earlier than 10 to 15 years from now, because much of what human brains do so effortlessly is difficult to translate into software algorithms.
But before they hit the road, these vehicles — specifically the scope of their independence and weapons systems — will be hotly debated in military circles.
Day two: Network-centric warfare
Day three: Autonomous intelligent systems
Day four: Directed-energy weapons