Imagine an artificial-intelligence-driven military drone capable of autonomously patrolling the perimeter of a country or region and deciding who lives and who dies, without a human operator.
Now do the same with tanks, helicopters and biped and quadruped robots. Welcome to the not-so-distant future of LAWs, or lethal autonomous weapon systems. At the UN conference on regulating LAWs in warfare, held this August in Geneva, delegates concluded not to ban the weapons outright but to revisit the topic in November. The stall was driven by the U.S., Russia, Israel, South Korea and Australia. Whatever happens at that follow-up meeting, one thing is certain: AI-controlled robotic warfare isn’t far off.
Not everyone shares this sentiment, though. In July, 2,400 leading figures in artificial intelligence (AI), including Tesla CEO Elon Musk, signed a pledge against killer robots, promising not to participate in the development or manufacture of machines that can identify and attack people without human oversight. It may sound encouraging, but countries can easily source the know-how and tools needed to build their lethal “tin men” even without those researchers on board.
So what’s the big deal with AI-run war machines? Proponents point to horrendous cases in which many civilian lives were lost to human misjudgment. “If an AI had been making the decisions,” they reason, “it wouldn’t have made the same mistake, and innocent lives would’ve been saved.”
Would they really? Although AI can recognize images remarkably well, anyone familiar with how it learns can exploit its blind spots. MIT students in Cambridge, Mass., 3D-printed a turtle whose texture had been subtly altered and fooled an image classifier into deciding the turtle was a rifle. They pulled the same trick with a baseball, which the AI mistook for an espresso cup.
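To see how such attacks work in principle, below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest recipes for crafting an adversarial image. It assumes PyTorch and torchvision are installed and uses a stock ResNet-18 classifier; the MIT team’s actual attack was more elaborate (it optimized a perturbation that survives 3D printing and changing camera angles), but the core idea of nudging pixels along the model’s loss gradient is the same.

```python
# A minimal FGSM sketch. Assumes PyTorch + torchvision; this illustrates
# the general technique, not the MIT researchers' exact method.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed to encourage misclassification.

    image:      tensor of shape (1, 3, H, W), preprocessed as the model expects
    true_label: tensor of shape (1,) holding the correct class index
    epsilon:    perturbation size; larger values fool the model more reliably
                but are more visible to a human
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    return (image + epsilon * image.grad.sign()).detach()

# Usage, with a hypothetical preprocessed image tensor `x` and label `y`:
#   x_adv = fgsm_attack(x, y)
#   model(x_adv).argmax(dim=1)  # often no longer the true class
```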
A group of researchers from Kyushu University in Japan used another trick — they changed a single pixel in a photo. This was enough for the AI to mistake cats for dogs. Or airplanes for dogs. Or frogs for trucks.
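The one-pixel attack is different in kind: it is a black-box method, meaning the attacker never sees the model’s internals, only the probability scores it outputs, and it hunts for the single most damaging pixel with differential evolution. Here is a rough sketch, where `predict_probs` is a hypothetical wrapper around whatever classifier is under attack:

```python
# A sketch in the spirit of the Kyushu one-pixel attack, not their exact code.
# `predict_probs` is a hypothetical function: it takes an HxWx3 uint8 image
# and returns an array of class probabilities.
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(image, true_class, predict_probs, max_iter=30):
    h, w, _ = image.shape

    def truth_confidence(params):
        # params = (x, y, r, g, b): paint one pixel, then query the model.
        x, y, r, g, b = (int(v) for v in params)
        candidate = image.copy()
        candidate[y, x] = (r, g, b)
        # Lower confidence in the true class means a more damaging pixel.
        return predict_probs(candidate)[true_class]

    bounds = [(0, w - 1), (0, h - 1), (0, 255), (0, 255), (0, 255)]
    best = differential_evolution(truth_confidence, bounds,
                                  maxiter=max_iter, popsize=10, seed=0)
    x, y, r, g, b = (int(v) for v in best.x)
    attacked = image.copy()
    attacked[y, x] = (r, g, b)
    return attacked
```

The search needs nothing but the model’s answers, which is exactly what makes it worrying: an attacker doesn’t need to steal the AI to defeat it.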
Now imagine a drone mistaking a hospital for a military installation. Or terrorists using specially masked vehicles to pass through AI-controlled defenses unscathed.
OK, so AI can be fooled. But deep learning is still in its infancy, the counterargument goes: give it more data, make it more robust, and it will become much harder to fool. Problem solved, right? Hardly. Fooling AI is not the only issue. What about hijacking? In a video demonstration by Wired, Andy Greenberg takes an SUV for a spin on the highway while two hackers attack it from miles away, seizing control of the car’s steering, transmission and brakes.
How did they do it? They wrote software that connected to the vehicle’s internet-connected entertainment system and worked its way from there to the driving controls. The lesson carries over to warfare: AI units can function autonomously, but in the end they need a way to communicate with one another and to transfer data to a command center. Those links make them vulnerable to hacking and hijacking.
What would happen if one of these drones or robots were hijacked by an opposing faction and started firing on civilians? A hacker would laugh at the question, because he wouldn’t hijack just one. He would design a self-propagating virus that spreads through the AI network, infecting every unit in the vicinity as well as any units communicating with them. In a split second, an entire squad of LAWs would be under enemy control.
Some proponents of LAWs in warfare, such as Ronald Arkin, a roboticist at the Georgia Institute of Technology, say an additional component, an “ethical governor,” can be embedded in a machine to provide an extra layer of evaluation for each lethal response, checking it against the Laws of War and the Rules of Engagement. Ultimately, such a safeguard doesn’t change the picture: every machine can be overridden, tricked, hijacked and manipulated with an efficiency that’s unheard of in the realm of human-operated traditional weaponry.
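For concreteness, here is what such a veto layer might look like in code. This is a toy sketch with entirely hypothetical rules, fields and thresholds (Arkin’s published architecture is far more sophisticated), and it also makes the weakness plain: the governor is just more software running on the same hackable machine.

```python
# A toy illustration of an "ethical governor" as a veto layer.
# All rule names, fields and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Engagement:
    target_type: str          # e.g. "armored_vehicle", "hospital"
    confidence: float         # classifier confidence in that identification
    collateral_estimate: int  # estimated civilians at risk

PROTECTED_SITES = {"hospital", "school", "place_of_worship"}

def governor_permits(e: Engagement) -> bool:
    """Veto logic: the default answer is no; one failed check blocks the action."""
    if e.target_type in PROTECTED_SITES:
        return False   # protected under the Laws of War
    if e.confidence < 0.999:
        return False   # identification not certain enough
    if e.collateral_estimate > 0:
        return False   # no tolerated collateral harm
    return True

# Everything above is one compromised update or one spoofed Engagement
# away from being bypassed -- which is the point of the objection.
```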
However, the U.S. government appears undeterred by these risks. DARPA, the Defense Advanced Research Projects Agency, has already announced a $2 billion campaign, dubbed “AI Next,” to develop the next wave of technologically advanced AI. One of its goals is to have machines “acquire human-like communication and reasoning capabilities, with the ability to recognize new situations and environments and adapt to them.”
I may be overreaching here, but the stalled UN meeting on one hand and this announcement on the other make me think that the U.S. government isn’t just pro-robot: it may already have a LAWs ace up its sleeve. I hope that’s a card it never decides to play. If it does, it could usher in an era of mass destruction on an unprecedented scale.
What do you think about killer robots replacing human combatants? Please let me know in the comment section below.