War ethics: Are drones in Ukraine a step toward robots that kill? – The Christian Science Monitor


At some point, militaries will likely allow artificial intelligence to decide when to pull the trigger – and on whom. Ukraine is showing just how close that moment might be.
Amid the bewildering array of brutality on and off the battlefields of Ukraine, military ethicists have been keeping a close eye on whether the war could also become a proving ground for drones that use artificial intelligence to decide whom to hurt. 
Early in the war, Moscow was rumored to be employing “kamikaze” drones as “hunter-killer robots.” Though the Russian company that created the weapon boasted of its AI skills – the kind that could potentially enable a machine rather than a human to choose its targets – the consensus among defense analysts has been that these claims were more marketing hype than credible capability.
Yet there’s no doubt that the demand for AI in drones has been voracious and growing. The drones on display in Ukraine all have a human pulling the trigger, so to speak – for now. “I don’t think we see any significant evidence that AI or machine learning is being employed in Ukraine in any significant way for the time being,” says Paul Scharre, who previously worked on autonomous systems policy at the Pentagon. 
“But I don’t think that will last over time,” he adds.
That’s because before the war, drones were seen as a useful counterterrorism tool against adversaries without air power, but not as particularly effective against big state actors who could easily shoot them down. The current conflict is proving otherwise. 
Wars have a way, too, of driving technological leaps. This one could teach combatants – and interested observers – lessons that bring the world closer to AI making “kill” decisions on the battlefield, analysts say. “I think of the Ukrainian war as almost a gateway drug to that,” says Joshua Schwartz, a Grand Strategy, Security, and Statecraft Fellow at Harvard University.
In March 2021, a great fear among AI ethicists was realized: A United Nations panel of experts warned that Turkey had deployed a weapon to Libya – the Kargu-2 quadcopter drone – that could hunt down retreating troops and kill them without “data connectivity” between the human operator and the weapon. 
Along with the flood of commentary decrying the use of so-called terminator weapons, a report from the U.S. Military Academy at West Point was more circumspect. It argued that ethical debate surrounding the Kargu-2 should concentrate not on whether it had killed autonomously, but whether in doing so it had complied with laws of armed conflict. 
“The focus of humanitarian concerns should be the drone’s ability to distinguish legitimate military targets from protected civilians and to direct its attacks against the former in a discriminate manner,” wrote the paper’s author, Hitoshi Nasu, a professor of law at West Point. 
Unlike the first iteration of autonomous weapons – land mines, for example – today’s AI systems are typically designed to avoid civilian casualties. For this reason, people should be more inclined to “embrace them,” Mr. Nasu posited in an interview with the Monitor.
But many critics can’t shake sci-fi-fueled biases, he adds.
“I remember someone writing on this topic, ‘It must be very scary for someone to be targeted by an autonomous system,’” Mr. Nasu says. “Well, warfighting is very scary – whether it’s done by autonomous systems or human beings.”  
Critics point out, however, that such a scenario might be especially alarming for soldiers trying in vain to, say, surrender to a machine. Accepting surrender requires “granting quarter” – showing mercy to an enemy laying down arms – and it’s unclear whether the Kargu-2, in the face of “technical challenges,” is capable of it, Mr. Nasu wrote in his paper. 
“Abandoning a weapon can be a machine-detectable surrender event,” but precisely what happens when combatants who “express an intention to surrender” are, due to injury, “unable to jettison their weapons,” he acknowledges, is unclear. 
The AI that gives these drones the ability to hunt human targets involves “machine learning” technology that has advanced considerably in the past decade. 
It relies on an enormous digital library of pictures on which the machine can draw. “If you have images of enemy tanks, but only in certain lighting conditions, or only when they’re not partially obscured by vegetation, or without people crawling on them, then the machine-learning model might not recognize the target and might make mistakes,” says Mr. Scharre, now vice president of the Center for a New American Security.
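The failure mode Mr. Scharre describes – a model trained on narrow data breaking down when conditions change – can be illustrated with a toy sketch. This is an entirely hypothetical example, with no relation to any real targeting system: a simple nearest-centroid classifier is trained to separate two classes of synthetic “images” captured only in bright lighting, then tested on much darker scenes.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_samples(n, label, brightness):
    # Toy "images": 64-pixel patches whose statistics depend on the
    # object class (label) and on scene lighting (brightness).
    base = 0.8 if label == 1 else 0.2  # class 1 = "target-like", class 0 = background
    X = brightness * (base + 0.05 * rng.standard_normal((n, 64)))
    return X, np.full(n, label)

# Training set: both classes, but ONLY bright scenes (brightness = 1.0)
X_tr = np.vstack([make_samples(200, 0, 1.0)[0], make_samples(200, 1, 1.0)[0]])
y_tr = np.concatenate([np.zeros(200), np.ones(200)])

# Nearest-centroid classifier learned from the narrow training data
centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    # Assign each sample to the class whose centroid is closest
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def accuracy(brightness):
    X0, _ = make_samples(200, 0, brightness)
    X1, _ = make_samples(200, 1, brightness)
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(200), np.ones(200)])
    return (predict(X) == y).mean()

print("same lighting as training:", accuracy(1.0))
print("much darker scene:        ", accuracy(0.3))
```

In the darker scenes, the “target” class ends up statistically closer to the bright-background centroid the model memorized, so accuracy collapses toward chance – a simplified version of the tank-in-different-lighting problem in the quote above.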
Drones with the AI to effectively employ such machine learning, however, are still “5 to 10 years” away, he says. After that, “we’ll see machine learning embedded for target recognition that could open doors to AI making its own targeting decisions.”  
The fact that the technology is for now unreliable is not the only reason it is generally unattractive to military professionals. 
Good commanders tend to seek the ability to “carefully calibrate the tempo and severity of a conflict,” says Zachary Kallenborn, research affiliate with the National Consortium for the Study of Terrorism and Responses to Terrorism. 
This in turn allows them “to achieve an objective and then stand down – like punishing an adversary for taking an action you don’t like but not creating a global war.” 
Military leaders are also wary of widespread use of AI in drones because of its potential to change what’s known in Pentagon parlance as “targeting incentives” on the battlefield. 
This involves scaring or cowing an enemy enough to get them to do things like surrender – or at least run away. 
Instead of pitting robots against other robots, strategists might decide to destroy an area where the humans waging war will actually suffer harm. This includes “cities, or anywhere they can really do damage,” Mr. Kallenborn adds. “I can see warfare escalating very quickly there.”
Officials in China, America’s greatest AI competitor, have expressed anxiety about precisely this, Gregory Allen, former director of strategy and policy at the Pentagon’s Joint Artificial Intelligence Center, wrote in a paper on the topic. “One official told me he was concerned that AI will lower the threshold of military action, because states may be more willing to attack each other with AI military systems due to the lack of casualty risk.” 
Influential Chinese think tanks have echoed these warnings, but, as in the U.S., such concerns have not stopped a government wary of being left behind in an AI arms race from pursuing the technology. 
Still, one of the takeaways in the ongoing tragedy of Ukraine is “how much it’s been a 20th-century war, primarily dominated by mud and steel rather than whiz-bang new technology,” Mr. Scharre says, raising the possibility, he argues, that “this fixation on fancy weapons as the ethical problem is mistaken.”
Mr. Allen, now director of the Project on AI Governance at the Center for Strategic and International Studies, concurs. “For a long time the autonomous weapons debate has been heavily focused on whether or not it increases the risk of technical accidents,” including killing civilians, he says in an interview. “But the war in Ukraine is a great reminder that while unintentional harm to civilians is a real tragedy, there is also the unsolved problem of intentional harm to civilians.” 
That’s a challenge that spans from old technology to new.
The conflict has made it increasingly clear that the question of whether a weapon is ethical is answered in large part, Mr. Allen adds, on whether it’s in the hands of a military – or a country – “that has any intention of behaving ethically.” 