(May 28, 2010) - You won't see mechanized humanoid soldiers on the battlefield soon, but robotic air and land units are here. Thousands of them aid in disarming bombs, and a variety of surveillance drones soar through the skies, ranging from the handheld RQ-11 Raven to the 44-foot (13.4-meter) RQ-4 Global Hawk. And yes, some of them pack heat.
Unmanned Aerial Vehicles (UAVs)
"The most developed [drones] are of course in the air," says Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield. "The MQ-1 Predator and the MQ-9 Reaper top the list at the moment -- and that's just the United States. The Predator has flown over a million flight hours now, and its use is accelerating. It's the most sought-after resource in the U.S. military."
These two UAVs can deliver heavy payloads via strategic "decapitation strikes" without putting human pilots at risk. Furthermore, the Predator and Reaper can each remain airborne for more than 24 hours at a time -- and that's just the beginning.
"At the moment, they fly for 24 to 26 hours and then land while another plane takes over," Sharkey says, "but there's a massive program funded by DARPA (Defense Advanced Research Projects Agency) called the Vulture, which entails the development of a five-year battery that will keep a payload of over 1,000 pounds (454 kilograms) in the air. So the idea is to try and get this continuous red dot on the enemy, so you can disarm a village and then hover over it for five years."
The Road to Autonomy
UAVs need remote human pilots to carry out their missions, but they require far less human interaction than you might think.
"These are all man-in-the-loop systems, which means essentially that someone controls the applications of lethal force," Sharkey says. "They're not exactly remote control. They're sort of a hybrid. They have certain autonomous functions, meaning that they can be programmed to react to their GPS so they can go about on their own. They can navigate themselves, though a pilot will control their height and that sort of thing."
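The division Sharkey describes -- autonomous waypoint navigation, with altitude and all lethal decisions reserved for the human pilot -- can be sketched in code. This is a hypothetical illustration only; the class and method names are invented and do not reflect any real UAV control system.

```python
# Hedged sketch of "man-in-the-loop" hybrid control: the vehicle follows
# programmed GPS waypoints on its own, but altitude commands and any
# weapons authorization come only from the human operator.
# All names are illustrative assumptions, not a real system's API.

class HybridUAV:
    def __init__(self, waypoints, altitude_m=3000):
        self.waypoints = list(waypoints)   # (lat, lon) pairs, pre-programmed
        self.position = self.waypoints[0]
        self.altitude_m = altitude_m       # set by the human pilot
        self.weapons_authorized = False    # lethal force stays with a person

    def set_altitude(self, meters):
        """Only the human pilot calls this -- altitude is not autonomous."""
        self.altitude_m = meters

    def authorize_weapons(self, operator_confirmed):
        """Lethal force requires an explicit human decision every time."""
        self.weapons_authorized = bool(operator_confirmed)

    def step(self):
        """Autonomous part: advance to the next programmed GPS waypoint."""
        if len(self.waypoints) > 1:
            self.waypoints.pop(0)
            self.position = self.waypoints[0]
        return self.position

uav = HybridUAV([(34.5, 69.1), (34.6, 69.2), (34.7, 69.3)])
uav.set_altitude(4500)          # human pilot adjusts height
print(uav.step())               # vehicle navigates itself: (34.6, 69.2)
print(uav.weapons_authorized)   # False until a person says otherwise
```

The point of the split is that navigation is delegated while the decision to apply force never is -- which is exactly what distinguishes these hybrids from full autonomy.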
As the technology improves, so will a UAV's ability to function without human guidance. Such advancements will steer the transition from man-IN-the-loop systems to man-ON-the-loop systems. The difference between the two, Sharkey stresses, is key.
"It's the first step toward full autonomy," Sharkey says. "The most recent U.S. Air Force documents describe a swarm of planes. The term 'swarm' is kind of a technical term in robotics, meaning a bunch of robots that interact with one another on a local basis. The man-on-the-loop would be in executive control of this swarm, so rather than having at least two pilots in charge of a Predator, you'll have one person in charge of a swarm of robots."
This technology would allow the planes to coordinate with each other during the actual attack, but the human executive pilot could easily alter or cancel the attack plan at any moment.
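The "man-on-the-loop" arrangement -- robots coordinating locally while a single human executive retains a veto -- can be sketched as follows. This is a toy model under assumed names; real swarm control software is far more complex.

```python
# Hedged sketch of "man-on-the-loop" executive control over a swarm.
# Each robot follows a simple local rule (move toward its neighbors),
# while one human executive can cancel the whole mission at any moment.
# Names and behaviors are illustrative assumptions only.

class SwarmRobot:
    def __init__(self, rid, position):
        self.rid = rid
        self.position = position  # 1-D position, for simplicity

    def local_step(self, neighbors):
        """Local swarm rule: drift toward the average neighbor position."""
        if not neighbors:
            return self.position
        avg = sum(n.position for n in neighbors) / len(neighbors)
        self.position += 0.5 * (avg - self.position)
        return self.position

class ExecutiveController:
    """One person 'on the loop': the swarm acts unless the human cancels."""
    def __init__(self, robots):
        self.robots = robots
        self.mission_active = True

    def cancel_mission(self):
        self.mission_active = False  # a single human veto halts every robot

    def tick(self):
        if not self.mission_active:
            return "mission aborted by executive"
        for r in self.robots:
            neighbors = [n for n in self.robots if n is not r]
            r.local_step(neighbors)
        return "swarm advanced"

swarm = ExecutiveController([SwarmRobot(i, float(i)) for i in range(3)])
print(swarm.tick())            # robots coordinate on purely local rules
swarm.cancel_mission()         # one operator overrides the whole swarm
print(swarm.tick())            # -> "mission aborted by executive"
```

Note the asymmetry: coordination is distributed among the robots, but the abort authority is centralized in one human, which is what keeps the person "on" rather than "in" the loop.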
The development of robotic ground units has long lagged behind that of high-flying UAVs. The Talon Sword, for instance, earned the nickname TRV or "Taliban Resupply Vehicle" during tests due to the ease with which it could be tipped over and robbed of its armaments.
In 2001, however, the United States military set the goal of turning one-third of its ground vehicles robotic by 2015. The resulting rate of technological advancement has been well illustrated by the DARPA Grand Challenge, a prize-driven competition for the development of driverless vehicles.
"Out of all that came the Crusher, made by Carnegie Mellon and funded by DARPA to the tune of 18 million dollars," Sharkey says. "There's no driver's seat. It's fully autonomous, but it can be remote-controlled as well. It's called the Crusher because it's a seven-and-a-half-ton truck, and it usually demonstrates its power by crushing Cadillacs. It's quite an incredible vehicle -- really rugged, very flexible and very, very powerful. So that will come into play over the next 10 years in an armed form."
The Principle of Distinction
The possibility of a robot uprising doesn't weigh too heavily on the minds of most roboticists, but that doesn't mean they're not concerned. In addition to U.S. government mandates for increasingly robotic armed forces, an estimated 43 countries are currently engaged in the proliferation of armed aerial robots.
"There are an incredible number of concerns," Sharkey says, "and the most important one for me -- and I've been thumping my fist on the table about it for about 30 or 40 years now -- is that robots are not in any position to discriminate targets."
The laws of warfare refer to this concept as the principle of distinction: Any weapon used in war must be able to discriminate combatants from civilians. In addition, an autonomous robotic weapon would need to identify surrendering enemies, wounded soldiers and clergy.
"There are no artificial intelligence systems around at the moment that can even begin to do this kind of discrimination," Sharkey says, "but you could have some robots that are fully autonomous and could find targets in a selected region and kill them."
In conventional warfare, such indiscriminate attacks could be used in a kill box, a designated area with a sufficient density of enemy combatants. The problem is that conventional warfare has largely disappeared, only to be replaced by insurgent warfare within civilian-occupied zones.
"It's very difficult for a human soldier to tell the difference, but we can reason," Sharkey says. "We have common sense. Speaking as a computer scientist, no matter how trivial we build a program, we have to specify the target problem and there is no specification for civilian. So you can't write a simple program to discriminate, and even if you could, our current visual systems aren't up to it."
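Sharkey's "no specification for civilian" argument can be made concrete. A kill box is trivially specifiable -- it's just a bounding region -- but no comparable formal definition of "civilian" exists for a programmer to implement. The stub below is purely illustrative; every name in it is hypothetical.

```python
# Illustrative sketch of the specification problem Sharkey describes.
# Geometry (a kill box) is easy to specify as a program; the legal
# concept "civilian" has no formal specification to implement.
# All function and parameter names here are hypothetical.

def inside_kill_box(lat, lon, box):
    """A kill box CAN be specified: it is just a bounding rectangle."""
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def is_civilian(sensor_data):
    """There is no formal specification of 'civilian' to implement."""
    raise NotImplementedError(
        "No specification exists for 'civilian'; "
        "discrimination cannot be reduced to a program.")

box = (34.0, 34.1, 69.0, 69.1)
print(inside_kill_box(34.05, 69.05, box))  # True: geometry is specifiable
try:
    is_civilian({"image": "..."})
except NotImplementedError as err:
    print(err)  # the target concept itself is unspecified
```

The contrast is the crux of the principle-of-distinction debate: autonomy over *where* to strike is programmable today, while autonomy over *whom* to strike is not.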
For an Ethical Architecture
Not everyone agrees that an ethical robot soldier is impossible. Georgia Tech's Professor Ronald Arkin has been researching whether a machine, given the proper hardware and software, could wage war more effectively and ethically than a human. The project, which Arkin completed in December 2009, is the subject of his book "Governing Lethal Behavior in Autonomous Robots."
"The project was concerned with finding a way to embed within an intelligent autonomous system the ability to adhere to the laws of war as encoded in the Geneva Convention and as consistent with the rules of engagement," Arkin says.
Arkin stresses that the ethical governor is intended as a single component in an overall ethical architecture. It can't replace soldiers. Instead, the project focused on areas where ethical questions are more easily handled, such as when targeting a sniper adjacent to a mosque. Should the robot launch a grenade or fire a single bullet? The more ethical choice is obvious.
Professor Sharkey, on the other hand, dismisses the notion of the ethically superior robotic soldier.
"There are some high-level people in the U.S. military who say robots could make better soldiers than soldiers," Sharkey says. "They could behave more humanely. They don't get angry. They don't seek revenge. They don't get hungry, and they follow their orders very exactly. The implications are that somehow the robot is thinking like a human when that's still the subject of artificial intelligence. Robotics is very advanced in the laboratory, but we're talking here about outside in the real world, with people's lives at stake, and it's a whole other issue."
Another frequently raised issue is that in removing the risk of human casualties, a robotic army would also effectively remove one of the prime deterrents to armed aggression and armed response in the modern era.
Human Wars, Robot Soldiers
Professors Sharkey and Arkin may differ on the potential for an ethical robotic soldier, but they agree that autonomous weapons systems are likely inevitable. They both advocate an open discussion of the issues and a full understanding of the technology involved.
"Unless it is banned by international law, which I'm not averse to, then something must be done to ensure that these systems operate safely and adhere to our nation's principles," Arkin says. "Not only from a friendly perspective, but from a noncombatant perspective."
Both Arkin and Sharkey frequently talk with universities, international governments and the global media in order to raise awareness of these issues.
Sharkey co-founded the International Committee for Robot Arms Control (ICRAC) in 2009, and the committee plans to hold its first major conference this September. Arkin will travel to Geneva in June to discuss the possible implementation of treaties with both the ICRAC and the Red Cross.
"The most important thing is that these discussions be held," Arkin says.
Originally published at Discovery News on May 28, 2010