Can we trust robots when they can decide human life or death?

With the development of artificial intelligence and machine learning, robots are taking on ever more important jobs: surgery, autonomous driving, battlefield operations, even decisions over human life and death. So, can we trust robots? In the popular imagination, robots are either near-perfect loyal butlers or figures like Ava from Ex Machina, Robby from Forbidden Planet, and HAL from 2001: A Space Odyssey. Although these portrayals merely reflect humanity’s own hopes and fears, those fears have at least begun to become reality.


Four-arm surgical robot

The da Vinci surgical robot

Ken Goldberg, a robotics expert at the University of California, Berkeley, has a da Vinci robot that can sew up surgical incisions, and the da Vinci can learn on its own through machine-learning software.

Offensive drones and shooting robots

The American TALON SWORDS robot

In recent years, many tech-world celebrities have voiced fears of artificial intelligence run amok: robots with superintelligence build a new world in which humans are no longer the protagonists and are enslaved, killed, or even exterminated. These horror scenarios differ little from what science-fiction writers imagined decades ago, but they have drawn wide attention, including from Stephen Hawking, Bill Gates, Elon Musk, and other technology luminaries.

Humans and machines are forming a new relationship. In the near future, we will begin to entrust automated robotic systems with tasks such as driving cars, performing surgery, and even choosing when to use lethal weapons in war. For the first time, humans will delegate life-or-death decisions to programmed machines rather than controlling them directly in complex, changing, and disorderly environments.

There is no doubt that robots are imperfect: they make mistakes and can even cause human casualties. But it is equally certain that this emerging technology will bring unprecedented opportunities to mankind. We will face challenges ranging from the technical and regulatory to the philosophical; beyond questions of code and policy, the new robots will force us to confront deeper ethical dilemmas and may even change how we see ourselves. In the end, though, the world will be a better place for the age of robots.

Precise and efficient surgical robot

At present, surgeons can use robotic arms to perform complex operations. Michael Stifelman, director of the robotic surgery center at New York University, has performed thousands of robot-assisted surgeries. He manipulates the robotic arms from a console; each arm enters the patient’s body through a tiny incision about 5 mm wide. As he rotates his wrists and squeezes his fingers, the arms inside the patient precisely reproduce the same motions.

He guides two arms to tie a suture, steers a third to pass a needle through the patient’s kidney and sew up the hole left by the removed tumor, while a fourth holds an endoscope that displays the interior of the patient’s abdominal cavity on a screen.

Stifelman is a highly trained specialist with great skill and judgment, yet here he is spending his precious time on stitching, mere follow-up to the main surgery. If a robot could take over this monotonous mechanical task, the surgeon would be free for more important work.

Today’s surgical robots have enhanced capabilities: they eliminate hand tremor during surgery and can carry out a wide variety of procedures. But at the end of the day, they are just advanced tools under direct human control. Dennis Fowler, executive vice president of the surgical-robotics company Titan Medical and a surgeon for 32 years, believes that if robots could make some decisions for themselves and carry out assigned tasks independently, medicine could serve humanity better: “This technological intervention increases reliability and reduces human error.”

This “promotion” for robots is not out of reach; most of the needed technology is already being developed in research labs and industry. Experimental robots practice suturing, cleaning wounds, and removing tumors on rubber models of human tissue. In some experiments robots match humans, and some are even more precise and efficient. Just last month, a Washington hospital demonstrated a robotic system suturing pig small-intestine tissue; a human surgeon performed the same procedure for comparison, and the robot’s sutures proved more uniform and finer.

While these systems are not yet ready for use on patients, they represent the future of surgery. The same logic applies in the operating room as on the assembly line: if greater automation improves performance, nothing will stop it.

Hutan Ashrafian, a bariatric surgeon and lecturer at Imperial College London, has studied the outcomes of robotic surgery. He believes that for the foreseeable future, surgical robots will handle simple tasks on doctors’ orders. “Our goal is to improve postoperative outcomes. If using a robot can save lives and reduce risk, then using such equipment is obligatory.”

Further ahead, the medical community will eventually adopt the next generation of robots: artificial intelligence with decision-making power. Such a robot could not only handle routine tasks but take over entire operations. However unlikely that seems today, technological innovation will lead there naturally. Ashrafian said: “This will happen step by step, even if no single step is particularly large. Just as doctors 50 years ago could not have imagined today’s operating room, 50 years from now it will be a different scene again.”

In fact, surgical robots can already make some decisions on their own, and their independence is greater than people think. In vision-correction surgery, for example, robotic systems cut a small flap in the patient’s cornea and reshape its inner layer with a series of laser pulses; in knee-replacement surgery, autonomous robots cut bone more precisely than doctors; in hair-transplant surgery, an intelligent robot can identify and harvest robust hair follicles on the patient’s head and then make precise holes in the scalp at the bald spot, sparing doctors a great deal of monotonous, time-consuming, repetitive labor.

Procedures involving the thoracic, abdominal, and pelvic regions present more complex challenges. Every patient’s physiology is different, and an autonomous robot must reliably identify a variety of soft, wet internal organs and blood vessels; because a patient’s organs can shift during an operation, the robot must also continuously adjust its surgical plan.

Robots also need to handle crises reliably, such as sudden massive hemorrhage during a tumor resection, which must be dealt with promptly and correctly. Surgery presents all manner of unpredictable, complicated situations. The robot’s imaging and computer-vision systems must first recognize the spurting red liquid and assess the severity of the situation; the decision stage must then select the best response and quickly put it into action; finally, an evaluation stage judges the result and determines whether further action is required. Getting surgical robots to master every step of perception, decision, action, and evaluation is a huge challenge for engineers.
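As a rough illustration of that perception-decision-action-evaluation cycle, here is a minimal sketch in Python; the sensing, thresholds, and responses are hypothetical placeholders, not any real surgical-robot API.

```python
# A minimal sketch of the perception-decision-action-evaluation cycle
# described above. Everything here is a hypothetical placeholder.

import random

def perceive():
    """Perception: return a bleeding-severity estimate in [0, 1], e.g. the
    fraction of camera pixels a vision model classifies as blood."""
    return random.random()  # stand-in for a computer-vision system

def decide(severity):
    """Decision: map the assessed severity to a response plan."""
    if severity > 0.7:
        return "apply pressure and alert the surgeon"
    if severity > 0.3:
        return "cauterize the bleeding vessel"
    return "continue the procedure"

def act(plan):
    """Action: carry out the chosen plan (here, just report it)."""
    print(f"executing: {plan}")

severity = perceive()
act(decide(severity))
if perceive() >= severity:  # evaluation: did the intervention help?
    print("no improvement, escalating to the human surgeon")
```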

Enter the practical “Da Vinci” system

In 2013, the California-based company Intuitive Surgical began donating “Da Vinci” surgical robotic systems, each worth up to $2.5 million, to robotics researchers at universities around the world. The “Da Vinci” is a soft-tissue surgical system approved by U.S. regulators, and more than 3,600 hospitals worldwide have installed one. Its commercial road has not been smooth, and it has faced lawsuits over surgical accidents, but despite the controversy, many hospitals and patients have embraced the technology.

The “Da Vinci” is entirely under the control of a human doctor; its arms are just lifeless pieces of plastic and metal unless the doctor grabs the joysticks at the console. For now, the company intends to keep it that way, said Simon DiMaio, its manager of advanced systems research and development. But roboticists are working toward a future in which doctors receive more and more assistance, with computers helping to guide their operations.

DiMaio pointed out that research in this area resembles the early days of self-driving cars: “the first step is to recognize road signs, obstacles, cars and pedestrians,” and the next step is to have the car help the driver. A smart car, for example, can sense the positions of surrounding vehicles and alert a driver who drifts out of lane by mistake. Likewise, a surgical robot can warn the doctor when a surgical instrument deviates from its usual path.
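In essence, such a warning reduces to comparing the instrument’s position against a reference path and alerting when the distance exceeds a tolerance. A minimal sketch, with invented coordinates and threshold:

```python
# A minimal sketch of the deviation warning described above: compare the
# instrument tip against a reference path and alert when it strays beyond
# a tolerance. Path, positions, and threshold are invented values.

import math

# Reference path from typical procedures (x, y, z in millimetres).
REFERENCE_PATH = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0), (2.0, 1.0, 0.1)]
TOLERANCE_MM = 1.5  # hypothetical alert threshold

def distance_to_path(point, path):
    """Distance from the instrument tip to the nearest reference waypoint."""
    return min(math.dist(point, waypoint) for waypoint in path)

tip = (2.2, 2.8, 0.0)  # current tip position from the tracking system
if distance_to_path(tip, REFERENCE_PATH) > TOLERANCE_MM:
    print("warning: instrument has deviated from its usual path")
```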

Ken Goldberg, director of the Laboratory for Automation Science and Engineering at the University of California, Berkeley, is also training his “da Vinci” to perform surgical tasks independently. Its suturing is already quite dexterous: one arm pulls the thread through both sides of a model wound while the other pulls the needle to tighten it, then begins the next stitch without human guidance. Using position sensing and cameras, it calculates the best entry and exit point for each pass and plans and tracks the needle’s trajectory. But the task remains daunting: it is currently reported to complete the four-stitch sequence only 50 percent of the time, and it cannot yet tie off the thread.

Now, Goldberg says, they use machine-learning algorithms to collect visual and kinematic data, divide each stitch into steps such as positioning and pushing the needle, and have the “Da Vinci” execute them in sequence. By this method, it could in principle learn to perform any surgical task.
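In outline, that decomposition amounts to representing a stitch as an ordered list of subtasks executed one at a time, each with a success check. A minimal sketch, with hypothetical step names rather than Goldberg’s actual pipeline:

```python
# A minimal sketch of the step-by-step decomposition described above: a
# stitch as an ordered sequence of subtasks with success checks. The step
# names and checks are hypothetical.

STITCH_STEPS = ["position_needle", "push_needle", "pull_thread", "tension_thread"]

def execute_step(step):
    """Run one learned subtask; return True if its success check passes."""
    print(f"running subtask: {step}")
    return True  # stand-in for the learned controller and its check

def sew_stitch():
    """A stitch succeeds only if every subtask succeeds, in order."""
    return all(execute_step(step) for step in STITCH_STEPS)

completed = sum(sew_stitch() for _ in range(4))  # the four-stitch sequence
print(f"completed {completed} of 4 stitches")
```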

In theory, the same procedure could guide a real operation. Goldberg believes simple surgical tasks can be automated within the next 10 years. But even if robots do perform better at routine surgery, he wants their actions to be “autonomous” under the supervision of a human doctor. Having robots do precise, consistent work over long stretches, he says, is like moving from hand sewing to the sewing machine; only by working together can robots and humans become super doctors.

Military robots redefined

In 1920, the Czech writer Karel Čapek published the science-fiction play “R.U.R. (Rossum’s Universal Robots)”, coining the term “robot” for synthetic humans who toil long hours in factories turning out low-cost goods. In the end, however, those robots kill the humans. Science fiction has returned to that theme ever since: a robot spins out of control and becomes an unstoppable killing machine.

Now, with the development of artificial intelligence and the advancement of robotics, coupled with the widespread use of drones and ground robots in the wars in Iraq and Afghanistan, people are beginning to worry that the fears in science fiction will become reality.

The world’s most powerful militaries are developing ever-smarter weapons with varying degrees of autonomy and lethality, the vast majority still remotely controlled by humans. But some believe that future artificial-intelligence weapons will eventually operate fully automatically, their onboard chips and software deciding human life and death, which would mark a watershed in the history of warfare.

As a result, the “killer robot” has sparked heated debate. One side believes such robots could start a world war and destroy human civilization; the other sees them as a new class of precision-guided weapons that will reduce rather than increase casualties. Some leading researchers in artificial intelligence have called for a ban on “offensive autonomous weapons beyond human control.”

Last year, three academic luminaries, Stuart Russell of the University of California, Berkeley, Max Tegmark of the Massachusetts Institute of Technology, and Toby Walsh of the University of New South Wales in Australia, organized a joint petition at an international artificial intelligence (AI) conference. In their open letter, they warned that these weapons would lead to a “global AI arms race” suited to “assassination, destabilization, repression of the masses, and the selective elimination of a minority group.” The letter has since gathered more than 20,000 signatures, including the physicist Stephen Hawking and Tesla CEO Elon Musk. Musk also donated $10 million to a Boston-based institute whose mission is to “defend life” against possibly malicious artificial intelligence. The story made news in major media around the world, and at the United Nations disarmament conference in Geneva in December, more than 100 countries joined the discussion of the issue.

The debate has also spread across the web, with all manner of predictions about the future. Some believe there could be “a low-cost mass black market for lethal micro-robots,” which buyers could program with criteria of their choosing and use to kill thousands of people who match them.

The three also noted: “Autonomous weapons are potential weapons of mass destruction. While some states may choose not to use them for this purpose, they will be extremely attractive to certain states and to terrorists.”

Whether or not building killing machines of ever greater intelligence, autonomy, and automation serves the interests of mankind, and however much controversy remains, a new round of the AI arms race has in fact already begun.

Intelligent weapons quietly emerging

Autonomous weapons have been around for decades, but in small numbers and mostly for defensive purposes. More recently, military suppliers have developed autonomous weapons that can be considered offensive. Israel Aerospace Industries’ Harpy and Harop drones home in on the radio waves emitted by enemy air-defense systems and destroy them by crashing into them; the company says the drones have been sold widely around the world. The South Korean defense contractor DoDAAM Systems has developed the “Super aEgis II” sentry robot, armed with a machine gun, which uses computer vision to automatically detect and shoot targets up to 3 kilometers away. The South Korean military has reportedly tested the armed robots in the demilitarized zone on the border with North Korea, and DoDAAM says it has sold more than 30 systems, some to buyers in the Middle East. For now, though, such autonomous systems are vastly outnumbered by remotely operated robotic weapons.

Some analysts believe weapons will grow steadily more autonomous in the coming years. “War is going to be completely different, automation is going to play a big role, and speed is key,” said Peter Singer, an expert on robotic warfare at New America, a nonpartisan research organization in Washington, D.C. He predicts future combat scenarios, such as a dogfight between drones or an encounter between a robotic warship and an enemy submarine, in which the weapon with the split-second advantage decides the outcome: “It may be a high-intensity direct confrontation with no time for humans to intervene, because everything happens in a matter of seconds.”

The U.S. military has laid out detailed plans for this new type of warfare in its roadmap for unmanned systems, but its intentions regarding their weaponization remain vague. At a forum last March, Deputy Defense Secretary Robert Work stressed the need to invest in AI and robotics, saying that ever more automated systems on the battlefield are inevitable.

Asked about autonomous weapons, Work insisted that the U.S. military “does not delegate the right to make lethal decisions to a machine.” But he added that if “competitors” chose to give such power to machines, the United States would have to decide how best to compete.

Russia is following the same strategy in developing unmanned combat systems for land, sea, and air, though for now they rely on human operators. Russia’s Platform-M is a small remote-controlled robot armed with a Kalashnikov rifle and grenade launchers, similar to the American SWORDS system (a ground robot that can carry an M16 and other weapons). Russia has also built a larger unmanned combat vehicle, the Uran-9, armed with a 30 mm cannon and anti-tank missiles, and last year it demonstrated a humanoid military robot.

The United Nations has been discussing lethal autonomous robots for nearly five years, but its member states have yet to draft an agreement. In 2013, UN Special Rapporteur Christof Heyns wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons are developed at scale.

In December, the United Nations Convention on Certain Conventional Weapons will hold its five-year review conference, and lethal autonomous robots will be on the agenda, but no ban can pass there: such a decision requires the unanimous consent of all participating countries, and fundamental differences remain over how to handle the fully autonomous weapons that may appear in the future.

Ultimately, the “killer robot” debate seems to be more about humans than about robots. Autonomous weapons, like any new technology, must be handled with great caution at first, or the results could be chaotic and disastrous. The question “Are autonomous combat robots a good idea?” may be the wrong one; a better one is “Do we trust robots enough to live with them?”

Self-driving cars that balance logic and ethics

Imagine that one night in the future, a drunk pedestrian suddenly stumbles in front of a driverless car and is killed on the spot. Had a person been driving, it would be ruled an accident: the pedestrian was clearly at fault, and even a reasonable driver could hardly have avoided him in time. But by the 2020s, as driverless cars spread and cut accident rates by a projected 90 percent, the legal standard of the “reasonable person” used to judge driver fault will give way to that of the “reasonable robot.”

The victim’s family will take the automaker to court, arguing that although the car could not brake in time, it should have swerved around the pedestrian, crossing the double yellow line and striking the empty driverless car in the next lane. A reconstruction of the collision from the car’s sensor data will confirm that this was possible. The plaintiff’s lawyer will ask the lead software designer: “Why didn’t the car swerve?”

A court would never ask a human driver why he failed to take some emergency action in the instant before a crash; the question would be pointless, because a panicked driver cannot think and acts on instinct. But when the driver is a robot, asking “why” makes sense.

Human moral standards, imperfect code, and the endless situations engineers cannot foresee all rest on one important assumption: a person of good judgment knows when to set aside the letter of the law in order to uphold its spirit. What engineers must now do is teach self-driving cars and other robots some elements of that judgment.

Currently, in parts of the UK, Germany, and Japan, in four U.S. states, and in the District of Columbia, the law explicitly allows the testing of fully automated vehicles, provided a test driver is in the car. Google, Nissan, Ford, and others have said they expect truly driverless operation within the next five to ten years.

Automated vehicles obtain environmental information from a battery of sensors: video cameras, ultrasonic sensors, radar, and lidar (laser ranging). In California, a permit to test an autonomous vehicle requires providing the DMV with all sensor data from the 30 seconds before any collision, which engineers can use to reconstruct crash scenes precisely. From the car’s sensor records, the logic behind its decisions can be inferred. After a crash, regulators and lawyers will be able to rely on those records to hold autonomous vehicles to superhuman safety standards and subject them to rigorous scrutiny, and manufacturers and software developers will be able to justify a driverless car’s behavior in ways unimaginable for human drivers today.
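One straightforward way to keep “the last 30 seconds” of sensor data available is a rolling buffer that discards older readings as new ones arrive and is written out when a crash is detected. A minimal sketch; the record format here is invented, not California’s actual reporting format:

```python
# A minimal sketch of a rolling sensor buffer holding the last 30 seconds
# of readings, dumped when a crash is detected. The record format is
# invented for illustration.

import time
from collections import deque

WINDOW_SECONDS = 30.0
buffer = deque()  # (timestamp, reading) pairs, oldest first

def record(reading):
    """Append a new reading and drop anything older than the window."""
    now = time.time()
    buffer.append((now, reading))
    while buffer and now - buffer[0][0] > WINDOW_SECONDS:
        buffer.popleft()

def on_crash_detected(path="crash_dump.log"):
    """Persist the buffered window for engineers and regulators."""
    with open(path, "w") as f:
        for ts, reading in buffer:
            f.write(f"{ts:.3f} {reading}\n")

record({"speed_mps": 12.4, "lidar_min_range_m": 3.1})
on_crash_detected()
```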

All driving involves risk, and how that risk is distributed among drivers, pedestrians, and cyclists, and even the nature of the risk itself, has an ethical component. What matters most, to engineers and the general public alike, is the autonomous vehicle’s decision-making system, which determines the moral character of the car’s behavior.

One strategy for morally ambiguous situations is to minimize losses while following the law. The approach is appealing because it lets developers defend the car’s conduct as lawful and passes to lawmakers the responsibility of defining ethical behavior. Unfortunately, the law in this area is far from complete.

In most countries, the law relies on human common sense. A self-driving car programmed simply to obey it will never cross a double yellow line, even at the risk of hitting a drunk pedestrian, and even if the only thing in the next lane is an empty driverless car. The law makes little allowance for emergencies, and car developers have no way of determining when it is safe to cross the double yellow line.
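That tension can be made concrete: if the law is coded as a hard constraint, the car must reject the lowest-harm maneuver whenever that maneuver is illegal. A minimal sketch of the drunk-pedestrian scenario, with invented harm scores; this is not any real vehicle’s decision system:

```python
# A minimal sketch of "minimize losses while following the law," applied to
# the double-yellow-line scenario above. Maneuvers and harm scores are
# invented for illustration.

# Candidate maneuvers: (name, expected_harm, is_legal)
maneuvers = [
    ("brake_in_lane", 0.9, True),                 # likely hits the pedestrian
    ("swerve_across_double_yellow", 0.1, False),  # hits the empty car instead
]

def choose(options, law_is_hard_constraint=True):
    """Pick the lowest-harm maneuver, optionally restricted to legal ones."""
    if law_is_hard_constraint:
        legal = [m for m in options if m[2]]
        options = legal or options  # fall back if no legal option exists
    return min(options, key=lambda m: m[1])

print(choose(maneuvers))        # braking wins: legal, but more harmful
print(choose(maneuvers, law_is_hard_constraint=False))  # the illegal swerve
```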

This is not an unsolvable problem in the ethics of road-vehicle automation. Similar weighings of risk and benefit are handled safely and reasonably in other fields; the allocation of donor organs, for example, weighs whether a transplant will bring a patient a better quality of life or a disabled one. The bigger challenge for self-driving vehicles is that they must make split-second decisions on incomplete information, in situations programmers often never considered, using ethics rigidly encoded in software.

Fortunately, the public does not unreasonably expect superhuman intelligence, and given the complexities of ethics and morality, a well-reasoned account of an autonomous vehicle’s behavior is enough. A solution need not be flawless, but it should be considered and defensible.
