Dangerous drones: Why flying robots are weapons and how to stop them

Robotics researchers have a duty to prevent autonomous weapons

Robotics is rapidly being transformed by advances in artificial intelligence.

And the benefits are widespread: We are seeing safer vehicles that can brake automatically in an emergency, robotic arms transforming factory lines that were once offshored, and new robots that can do everything from shopping for groceries to delivering prescription drugs to people who have trouble doing it themselves.

But our ever-growing appetite for intelligent, autonomous machines poses a host of ethical challenges.

Rapid advances have led to ethical dilemmas

These ideas and more were swirling as my colleagues and I met in early November at one of the world’s largest autonomous robotics-focused research conferences – the IEEE International Conference on Intelligent Robots and Systems. There, academics, corporate researchers, and government scientists presented developments in algorithms that allow robots to make their own decisions.

As with all technology, the range of future uses for our research is difficult to imagine. It’s even more challenging to forecast given how quickly this field is changing.

Take, for example, the ability of a computer to identify objects in an image: in 2010, the state of the art succeeded only about half of the time, and it was stuck there for years. Today, though, the best published algorithms reach 86% accuracy.

That advance alone allows autonomous robots to understand what they are seeing through the camera lenses. It also shows the rapid pace of progress over the past decade due to developments in AI.

This kind of improvement is a true milestone from a technical perspective. Whereas in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be rapidly and accurately parsed by a computer program.
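To make that concrete, here is a minimal sketch of this kind of automated footage review, using a pretrained image classifier from the torchvision library. The model choice, file names and reporting format are illustrative assumptions, not a description of any particular surveillance system.

```python
# A rough sketch (not any specific product) of automated footage review:
# label each extracted video frame with a pretrained ImageNet classifier.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT           # pretrained ImageNet weights
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

def label_frame(path: str) -> str:
    """Return the most likely ImageNet category for one video frame."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)      # shape (1, 3, H, W)
    with torch.no_grad():
        scores = model(batch).softmax(dim=1)
    return weights.meta["categories"][int(scores.argmax())]

# Hypothetical usage on frames extracted from a video:
# for frame in ["frame_0001.jpg", "frame_0002.jpg"]:
#     print(frame, label_frame(frame))
```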

San Francisco became the first U.S. city to ban the use of facial recognition technology by police and other city agencies. This same technology can be coupled with drones, which are becoming more autonomous. AP Photo/Eric Risberg

But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin the decisions related to privacy and security have been fundamentally altered. For example, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically alters those privacy implications.

Easy-to-modify systems

When developing machines that can make their own decisions – typically called autonomous systems – the ethical questions that arise are arguably more concerning than those in object recognition. AI-enhanced autonomy is developing so rapidly that capabilities which were once limited to highly engineered systems are now available to anyone with a household toolbox and some computer experience.

Commercial drones allow for many beneficial uses, such as delivering medicine or spraying for mosquitoes. AP Photo/Haroub Hussein

People with no background in computer science can learn some of the most state-of-the-art artificial intelligence tools, and robots are more than willing to let you run your newly acquired machine learning techniques on them. There are online forums filled with people eager to help anyone learn how to do this.

With earlier tools, it was already easy enough to program your minimally modified drone to identify a red bag and follow it.
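As an illustration of how low that barrier is, the sketch below shows the classic colour-thresholding approach: find the largest red region in each camera frame and report its position, which a hobbyist flight controller could then steer toward. The HSV thresholds and the camera index are assumptions for illustration; nothing here is specific to any drone.

```python
# A minimal sketch of colour-based tracking with OpenCV: locate the largest
# red blob in each frame. The thresholds below are illustrative guesses.
import cv2

def find_red_object(frame_bgr):
    """Return the (x, y) centre of the largest red blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)                    # any attached camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    centre = find_red_object(frame)
    if centre is not None:
        # A real system would convert this pixel offset into steering commands.
        print("red object at", centre)
cap.release()
```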

More recent object detection technology unlocks the ability to track objects drawn from more than 9,000 different categories.
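For comparison, here is a sketch of that newer approach: a pretrained deep object detector that localises and names every object it recognises in a frame. This example uses a torchvision model trained on the 80 COCO categories; detectors covering 9,000-plus categories work on the same principle, just with a much larger label set. The image path and score threshold are illustrative assumptions.

```python
# A minimal sketch of multi-class detection with a pretrained torchvision
# model (80 COCO categories here; larger label sets work the same way).
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("frame.jpg")                      # hypothetical camera frame
with torch.no_grad():
    result = model([preprocess(img)])[0]           # dict of boxes, labels, scores

for box, label, score in zip(result["boxes"], result["labels"], result["scores"]):
    if score > 0.8:                                # illustrative threshold
        print(weights.meta["categories"][int(label)], f"{score:.2f}", box.tolist())
```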

Combined with newer, more maneuverable drones, it’s not hard to imagine how easily they could be equipped with weapons. What’s to stop someone from strapping an explosive or another weapon to a drone equipped with this technology?

Drones are already a threat. They have been caught dropping explosives on U.S. troops, shutting down airports and being used in an assassination attempt on Venezuelan leader Nicolas Maduro. The autonomous systems being developed right now could make staging such attacks easier and more devastating.

Reports indicate that the Islamic State is deploying off-the-shelf drones, some of which are being used for bombings.

Regulation or review boards?

About a year ago, a group of researchers in artificial intelligence and autonomous robotics put forward a pledge to refrain from developing lethal autonomous weapons. They defined lethal autonomous weapons as platforms that are capable of “selecting and engaging targets without human intervention.”

As a robotics researcher who isn’t interested in developing autonomous targeting techniques, I felt that the pledge missed the crux of the danger.

It glossed over important ethical questions that need to be addressed, especially those at the broad intersection of drone applications that could be either benign or violent.

For one, the researchers, companies and developers who wrote the papers and built the software and devices generally aren’t doing it to create weapons. However, they might inadvertently enable others, with minimal expertise, to create such weapons.

What can we do to address this risk?

Regulation is one option, and it is already used to ban aerial drones near airports and over national parks. Those rules are helpful, but they don’t prevent the creation of weaponized drones. Traditional weapons regulations are not a sufficient template, either.

They generally tighten controls on the source material or the manufacturing process.

That would be nearly impossible with autonomous systems, where the source materials are widely shared computer code and the manufacturing process can take place at home using off-the-shelf components.

Another option would be to follow in the footsteps of biologists. In 1975, they held a conference on the potential hazards of recombinant DNA at Asilomar in California.

There, experts agreed to voluntary guidelines that would direct the course of future work. For autonomous systems, such an outcome seems unlikely at this point.

Many research projects that could be used in the development of weapons also have peaceful and incredibly useful outcomes.

A third choice would be to establish self-governance bodies at the organization level, such as the institutional review boards that currently oversee studies on human subjects at companies, universities and government labs. These boards consider the benefits to the populations involved in the research and craft ways to mitigate potential harms. But they can regulate only research done within their institutions, which limits their scope.

Still, a large number of researchers would fall under these boards’ purview – within the autonomous robotics research community, nearly every presenter at technical conferences is a member of an institution. Research review boards would be a first step toward self-regulation and could flag projects that could be weaponized.

Living with the peril and promise

Many of my colleagues and I are excited to develop the next generation of autonomous systems. I feel that the potential for good is too promising to ignore.

But I am also concerned about the risks that new technologies pose, especially if they are exploited by malicious people.

Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm.

Source: https://theconversation.com/robotics-researchers-have-a-duty-to-prevent-autonomous-weapons-126483

Drones That Kill on Their Own: Will Artificial Intelligence Reach the Battlefield?

On a stage before a crowded auditorium, an executive unveils an amazing advance: a tiny drone endowed with artificial intelligence (AI) that fits in the palm of his hand and can select its human target and fire a three-gram explosive charge into the brain. It is impossible to shoot down, its reactions are a hundred times faster than a human being’s, and one cannot escape or hide from it. Flying in a swarm, these drones can overcome any obstacle. “They cannot be stopped,” says the speaker.

Next come excerpts from television news reporting a lethal attack by these devices on the US Senate. A woman follows the news while chatting online with her son abroad.

The conversation ends abruptly when a swarm of drones strikes the young man and other students who had shared a video on their social networks.

Finally, the film returns to the presentation, where the executive boasts that a target can be selected by something as simple as having posted a specific hashtag.

All this is just fiction from the short film Slaughterbots, published by the Campaign to Stop Killer Robots, an initiative promoted by the International Committee for Robot Arms Control (ICRAC) and other entities.

But according to the warning at the end of the video from Stuart Russell, professor of computer science at the University of California, Berkeley (USA), this is more than just speculation: the technology already exists, and soon these lethal autonomous drones could become a reality.

In fact, last November, the US Department of Defense opened a call for the development of “automatic target recognition of personnel and vehicles from an unmanned aerial system using learning algorithms.”

Drones capable of deciding for themselves

Armed drones have been on the battlefield for decades, but until now they have been simple devices that are controlled from a distance.

U.S. Secretary of Defense Jim Mattis recently declared that calling current drones unmanned is a mistake, since they are at all times under the control of a human pilot.

The potential leap forward is profound: today the talk is about making devices the size of a domestic drone, capable of deciding for themselves and without human supervision who is to be attacked and then doing so.

Paul Scharre, a former special operations officer, former Pentagon adviser and author of the new book Army of None: Autonomous Weapons and the Future of War (W. W. Norton & Company, 2018), told OpenMind that while “no country has stated that they intend to build fully autonomous weapons,” at the same time “few have ruled them out either.”

Scharre, who currently heads the National Security and Technology Program at the think-tank Center for a New American Security, warns: “Many countries around the world are developing ever more advanced robotic weapons, including many non-state groups.” These advances may include varying degrees of autonomy, and for the expert the key question is whether the line will be crossed towards the total elimination of human control, “delegating life and death decisions to machines.”

Until now, military drones have been simple devices that are controlled from a distance. Credit: U.S. Air Force/ Kemberly Groue

However, there is no doubt that this technology is now accessible. And there is no shortage of those who believe that if states that respect the law abstain from developing it, they will be defenceless against its use by aggressor nations and terrorist groups.

Another question is whether AI applied to warfare will end up being used. Some experts suggest that it could generate a deterrent effect that leads to a balance of power, as happened with the nuclear escalation during the Cold War.

But Scharre doubts the viability of this scenario: nuclear missiles could be tracked via satellite, whereas AI-based weapons get their autonomy from software, which makes monitoring them enormously complex. “The biggest challenge is the difficulty in verifying compliance with any kind of cooperation,” says the expert.

“This makes it very likely that nations will invest in autonomous technology, if nothing else out of fear that their adversaries are doing so.”

Avoid human errors and emotions

Those who support the development of autonomous military drones also point to their ability to avoid human errors and emotions, freeing current pilots from the moral responsibility for casualties, a position defended by robotics engineer Ronald Arkin of the Georgia Institute of Technology (USA).

However, in addition to the danger of suppressing any hint of humanity, other experts point out that the process of refining any technology is fraught with errors, which in this case would mean deaths caused by software bugs or recognition mistakes.

What’s more, those companies and individuals that contribute to creating the necessary basic technologies may suddenly find themselves as potential military objectives.

For all of the above, organizations such as ICRAC advocate a “prohibition of the development, deployment and use of armed autonomous unmanned systems.”

As professor Steve Wright, of the Politics & International Relations Group at Leeds Beckett University (United Kingdom) and a member of ICRAC, explained to OpenMind, the objective of this entity is to demand from the United Nations a ban under the Geneva Convention on Certain Conventional Weapons (CCW). “The negative legal, political and ethical consequences of autonomous armed drones far outweigh any temporary military utility,” writes Wright. Last September, more than a hundred senior executives of technology companies signed an open letter urging the CCW to take action on the matter, although without explicitly requesting a ban.

Some organizations advocate a prohibition of armed autonomous unmanned systems. Credit: U.S. Navy/ Daniel J. McLain

“If current negotiations fail, we can anticipate these drones rapidly proliferating to both rogue states and non-state actors, including terrorists,” Wright warns.

The expert is aware that no prohibition will entirely suppress the risk, especially since a large part of the technologies involved were developed for civilian use and are commercially available for other purposes, unlike the case of nuclear weapons.

However, Wright hopes that states and international collaboration can tackle the development and smuggling of these systems and their components.

At the last meeting of the CCW, held last November in Geneva (Switzerland), progress was made, such as China’s declared opposition to autonomous weapons.

Awareness of the problem has spread far enough, writes Wright, for agreements to be signed aimed at preventing “a new era of push-button assassination.” “Future generations will thank us when we succeed, as we must,” he concludes.

Javier Yanes

@yanes68

Source: https://www.bbvaopenmind.com/en/technology/artificial-intelligence/drones-that-kill-on-their-own-will-artificial-intelligence-reach-the-battlefield/
