Joanna Diane Caytas, JD Candidate, Columbia Law School 2017
As robots, machine learning, and intelligent automation shape ever more areas of our lives and jobs, many assume that, because machines have been around for centuries, laws applicable to robots will require only some well-adjusted analogies. They would be wrong: the need for a Law of Robotics is becoming increasingly evident, yet the ideas proposed to get it started are mired in controversy. Machines have substituted for human labor most effectively for well over two centuries now, and quasi-Luddite warfare against technology has been around just as long. So long as mechanical weaving looms, assembly lines, and ATMs replaced expendable “downmarket” labor, technology was thought to bring progress and efficiency. Now, with college-graduate and professional jobs increasingly at stake, panicked elites begin to voice concern. In part, that is myopic: Schumpeter’s “creative destruction” affected old money and old technologies just as it did feudal entitlements. Never before, though, has innovation threatened to substitute for substantially all human vocational activity and value added, as artificial intelligence (AI) and advanced robotics do now.

To put it in terms legal minds understand more readily: we know a crisis is at hand when globally renowned law firms start to look seriously into replacing junior associates with AI, when cost-efficient practice without AI seems increasingly impossible, and when robot judges are shown to be capable of doing the job. Will robots replace attorneys as we know them? The short answer is yes, at least “as we know them.” Wide areas of the medical field are not sheltered, either. Yet it is far from clear that net job losses will exceed the net job gains created by AI in the robotics economy. Wolfgang Wahlster, head of the German Research Center for Artificial Intelligence, argues that, on the contrary, the use of robotics in labor processes has actually reduced unemployment in Germany. In the U.S., a different and less optimistic perspective has evolved to date, especially reflecting the country’s rivalry with China in robotics.
Robots in Contracts and Torts
Like previous incarnations of innovative technology, robotics has not been adopted flawlessly. In 2015, a robot crushed a human worker at a Volkswagen plant in Germany. In northern Florida, a self-driving Tesla Model S on Autopilot caused the technology’s first fatal accident. The risk of bodily injury rises with smart autonomous robots based on AI. Where foreseeability of damages is increasingly remote or even practically nonexistent given the current state of science and technology, tort systems such as the German and Austrian ones, which base employer liability on the intent of company representatives or supervisors, limit liability for damage claims to negligent employees. Due to employees’ limited personal resources and a lack of mandatory insurance, this solution is often unsatisfactory. Of course, product liability claims remain available in theory, but they are subject to a range of limitations when it comes to liability for intangibles such as software and for consequential damages caused by its flaws. For example, the faulty transmission of data caused negligently (or otherwise without criminal intent) by an employer may suffice to cut off product liability claims against the manufacturer. The concern is that the self-learning systems of autonomous robots will become so unpredictable as to break the chain of causation under current legal standards. For example, a manufacturer could successfully assert as a defense that it could not have known that a car would start chasing pedestrians on the sidewalk (possibly having been “taught” bad driving habits by its owner).
Similar issues arise in contract law: a smart autonomous robot providing nursing care to a disabled human should be able to enter into valid contracts without risk of the counterparty disavowing them for lack of human involvement and a “meeting of the minds.” There are multiple conceivable ways to achieve the desired result, but current proposals have not been very persuasive.
Robots in Uniform: Military and Police
In July 2016, Dallas police set a precedent for extrajudicial killing by robot when they bombed a suspected cop killer using a remotely controlled ground vehicle – a kind of “war zone operation” beyond war zones. Robotic warfare has been on the radar of realistic development options for at least a decade and raises substantial concern about a robotic arms race. The UN will discuss a proposed ban on lethal autonomous weapons (“killer robots”) in 2017, as well as their multiple human rights implications.
EU Initiative for “Electronic Personhood”
On January 12, 2017, the European Parliament’s Legal Affairs Committee adopted, by a vote of 17 to 2 with two abstentions, a draft report by Luxembourg MEP Mady Delvaux, submitted on May 31, 2016, with recommendations to the Commission on Civil Law Rules on Robotics concerning liability for personal injuries to humans. It advanced the idea of creating a legal status of electronic personhood, distantly comparable to the legal personhood available to business organizations, stating:
[T]hanks to the impressive technological advances of the last decade, not only are today’s robots able to perform activities which used to be typically and exclusively human, but the development of autonomous and cognitive features – e.g. the ability to learn from experience and take independent decisions – has made them more and more similar to agents that interact with their environment and are able to alter it significantly, whereas, in such a context, the legal responsibility arising from a robot’s harmful action becomes a crucial issue. (Q., at 5).
Furthermore,
the more autonomous robots are, the less they can be considered simple tools in the hands of other actors … this, in turn, makes the ordinary rules on liability insufficient and calls for new rules which focus on how a machine can be held – partly or entirely – responsible for its acts or omissions … as a consequence, it becomes more and more urgent to address the fundamental question of whether robots should possess a legal status. (S., at 5)
It is difficult to disagree more with a piece of legal innovation: legal personhood was created to limit liability by defining terms and provisions that confine creditors’ recourse to an organization’s assets while shielding human actors from personal liability. This cannot be the purpose of the European initiative, because such a solution already exists: conceivably, each robot could be made the sole asset of a special purpose vehicle endowed with legal personhood. As for holding robots “personally” responsible, they are still chattel owned and controlled by humans – in most cases through business organizations, but increasingly also by consumers. Rather than create a legal fiction that will prove difficult to delineate and require a great deal of time for satisfactory jurisprudence to materialize, the EU should apply the tested option of strict liability. Prior to a singularity event – at which point human-administered justice will have reached its end in the first place – it is unlikely that robots will “hold” assets of their own, making the imposition of liabilities illusory. It is also extremely unlikely that specialized robots will be able to perform the non-financial elements of reparation, compensation, indemnification, recompense, redress, or amends, especially for damages resulting from personal injury. It has been observed that no convincing argument has been, or arguably could be, made for the need for “electronic personhood”: “Blue whales and gorillas don’t have personhood but I would suggest that they have as many aspects of humanity as robots, so I don’t see why we should jump into giving robots this status.”
The notion of personhood is, of course, a red herring – a sideshow, a mere matter of convenience and legal fiction like corporate personhood, and not a pathway to actually conferring “rights,” much less any “human rights,” as has been claimed. The use of the term was a bungling mistake to start with. The idea is really about simplifying the accountability of corporations for robot-caused damages. Similarities with blue whales and gorillas aside, and much as it may currently seem like far-away science fiction, the report echoes Stephen Hawking’s warning that a combination of capitalism and automation could embolden a globalist oligarchy amid bizarre levels of human inequality. But in light of the speed and unpredictability of technological development, Delvaux’s notions of creating a robust European legal framework for robots expected to become available over the next 10–15 years – by establishing a European agency for robotics and AI (a proposal also made in the U.S.), defining and registering “smart autonomous robots,” and enacting an advisory code of conduct for engineers promoting the ethical design, production, and use of robotics – are not entirely realistic, or at least predictably of limited consequence. By contrast, it would be feasible in the near term to establish a mandatory insurance scheme covering corporate liability for robot-caused damage, along with a reporting structure reflecting the contribution of robotics and AI to corporate financial results for redistributive taxation purposes (predictably, highly controversial in many quarters). This latter aspect leads to a particularly important part of the current debate.
From Robot Tax to Unconditional Basic Income?
Virtually all manufacturing and agricultural tasks, along with many service tasks performed today by humans, will be carried out by robots at some foreseeable and increasingly near point in time. As a result, people will make vocational choices no longer driven by survival or the creation of tangible value, but increasingly emphasizing creative and cognitive abilities.
Such changes in the working environment will have to result in new models of taxation. As robots and AI displace human labor across wide areas of value creation, their cost of ownership will need to include taxes to support humans who no longer work to create material value – and who may increasingly be replaced even in the creation of intellectual property. The concept of an unconditional basic income serves not only to forestall neo-Luddite battles against technology, automation, and AI; over time, there will be no alternative to it as robots assume increasingly complex tasks. Thus, the cost of technology will be determined neither by the capital expenditures for devices and maintenance, nor by IP, but by the replaced human labor – a political choice. It implies choices in IP law as well: if we extrapolate the notion of self-learning machine intelligence, it is inevitable that robots will make valuable and patentable inventions. Whose shall they be – the manufacturer’s, the owner’s, the software engineer’s, or society’s at large? A small body of precedent has thus far emerged in the U.S. regarding salvage and other rights secured (or violated) by robots as agents.
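To make the taxation logic concrete, consider a minimal back-of-the-envelope sketch of a robot tax pegged to the payroll value of the labor a machine replaces. Every figure and the `payroll_tax_rate` parameter are hypothetical illustrations invented for this example, not any proposed legislation:

```python
# Hypothetical sketch: a robot tax pegged to replaced human labor,
# as the text suggests. All figures below are invented examples.

def annual_robot_tax(replaced_workers: int,
                     avg_annual_wage: float,
                     payroll_tax_rate: float) -> float:
    """Tax the owner as if the displaced wages were still being paid."""
    replaced_wage_bill = replaced_workers * avg_annual_wage
    return replaced_wage_bill * payroll_tax_rate

# A machine replacing three $40,000/year workers, taxed at a
# 15% payroll-equivalent rate, would owe $18,000 per year:
print(annual_robot_tax(3, 40_000, 0.15))  # -> 18000.0
```

On this logic, the tax base tracks displaced labor rather than the machine’s capital cost or its embedded IP – which is precisely why the resulting price of technology is a political choice rather than an engineering datum.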
In Hawking’s words:
If machines produce everything we need, the outcome will depend on how things are distributed … Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
It may be too early for such assumptions, absent adequately developed AI and robotics technology. But such considerations may also underestimate the political potential of still partly dormant neo-Luddite movements. Although their attention was diverted to the fight against free-trade globalism in the 2016 U.S. election, they have shown their potential to shake up conventional wisdom and analytical predictions.
The Ominous Kill Switch
Few will contest the necessity of a tool preventing robots from killing people. Some form of “kill switch” – such as has long been introduced for the Internet in various countries and discussed in the U.S. – is among the least controversial concepts to be legally mandated. However, the specifics are not quite as obvious: what about the ability of the police to “shut down and stop” an autonomous vehicle involved in traffic violations or a crime, as ENLETS, the European Network of Law Enforcement Technology Services, has proposed (even though the initiative has received no follow-up to date)? What about the downing of a “formerly autonomous” hijacked plane likely to be used as a weapon?
Several reasons may justify a kill switch: for example, to prevent information from getting into the wrong hands – a major data protection issue (yet, again, quis custodiet ipsos custodes – who shall guard the guardians themselves? Who decides whose hands are “wrong”?) – or to prevent AI or robots from going rogue, since biases can multiply very quickly in AI, as Microsoft’s experience with its Twitter chatbot has shown.
Problems arise with standardization and control: unless a kill switch is quickly and easily accessible to anyone endangered, it may not be of much use. On the other hand, an easily accessible switch may leave the robot vulnerable to abuse, and therefore useless and unreliable. As Tenzin Priyadarshi, president of the Center for Ethics and Transformative Values at MIT, pointed out, smart algorithms are inherently dangerous when capable of value judgments that appear logically correct on the surface: “If there is an algorithm that decides humans are the most dangerous things on the planet, which we are proving to be true, the rational choice an AI can make is that we should destroy human beings because they’re detrimental to the planet.” This corroborates earlier warnings by, inter alia, Stephen Hawking and Elon Musk about human-level AI being “potentially more dangerous than nukes.” Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, regards kill switches as a no-brainer, assuming the control issue is capable of a consensus solution:
If it is possible to have conscious AI with rights, it’s also possible that we can’t put the genie back in the bottle, if we are late with installing a kill switch. For instance, if we had to, could we turn off the internet? Probably not, or at least not without major effort and time, such as severing underwater cables and taking down key nodes.
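The access-versus-abuse tension described above can be restated in software terms. Below is a minimal sketch, assuming a hypothetical set of authorized keys and an HMAC-signed stop command (none of this reflects any proposed standard); it shows that every authentication step added to resist abuse is also a step an endangered bystander must clear:

```python
# Minimal sketch of an authenticated emergency-stop ("kill switch") check.
# The key set, message format, and HMAC scheme are illustrative assumptions.
import hmac
import hashlib

AUTHORIZED_KEYS = {b"regulator-secret", b"owner-secret"}  # hypothetical

def verify_stop_command(message: bytes, signature: bytes) -> bool:
    """Accept a stop command only if signed by an authorized key."""
    return any(
        hmac.compare_digest(
            hmac.new(key, message, hashlib.sha256).digest(), signature)
        for key in AUTHORIZED_KEYS
    )

def control_step(pending_commands) -> str:
    """One iteration of the robot's loop: a verified STOP preempts all tasks."""
    for message, signature in pending_commands:
        if message == b"STOP" and verify_stop_command(message, signature):
            return "HALTED"
    # ... normal operation would continue here ...
    return "RUNNING"
```

The design dilemma lives in `AUTHORIZED_KEYS`: widen the set and anyone endangered can stop the machine, but so can an attacker; narrow it and the switch resists abuse but becomes useless to bystanders – exactly the standardization-and-control problem noted above.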
The Road Ahead
The European Parliament’s draft initiative envisions a mandatory kill switch. However, with the adopted committee report now under consideration by the full parliament, it is still unclear when it will be submitted to the Commission and, should the Commission take up the matter, what any final legislation might look like after deliberations (likely to take at least one year). Still, the Delvaux report has sounded a clarion call at the outset of 2017 about the normative necessities and legal consequences of technology that emerges faster than traditional democratic processes may be able to accommodate – at least without being outdated by the time of enactment. Other aspects aside, without effective checks and balances and time-efficient judicial review procedures, the internet industry’s predicted “huge clash with Europe” over AI is almost certain to occur.