Autonomous weapons guided by artificial intelligence are already in use. Researchers, legal experts and ethicists are struggling with what should be allowed on the battlefield.
In Russia’s war of aggression against Ukraine, video footage has shown Ukrainian drones penetrating deep into Russian territory, more than 1,000 kilometres from the border, and destroying oil and gas infrastructure. It is likely, experts say, that artificial intelligence (AI) is helping to direct the drones to their targets. For such weapons, no person needs to hold the trigger or make the final decision to detonate.
“The technical capability for a system to find a human being and kill them is much easier than to develop a self-driving car,” says Stuart Russell, a computer scientist who campaigns against AI weapons. Some argue that accurate AI weapons could reduce collateral damage while helping vulnerable nations to defend themselves. At the same time, observers worry that handing targeting decisions to an algorithm could lead to catastrophic mistakes. The United Nations will discuss AI weapons at a meeting this September, a potential first step towards controlling the new threat.