Today, the European Parliament discusses the Delvaux report on “Civil Law Rules on Robotics”. Tomorrow, we will vote on it.
The report includes the call for a general discussion “on new employment models and on the sustainability of our tax and social systems” as robots continue to displace workers. Because the Legal Affairs Committee included a call for a debate on the possibility of introducing a general basic income, conservatives are asking to delete the entire clause. It remains to be seen whether the provision will survive tomorrow’s vote.
* * *
I want to highlight some other important elements of the report that were introduced because of efforts by my Greens/EFA colleagues and me.
Robotics and artificial intelligence don’t only come in the form of self-driving cars, heavy industrial machines, or four-legged creatures that don’t trip when you kick them. Even small devices such as medical implants or robotic prostheses today contain increasingly complex algorithms. When a device replaces or enhances bodily functions, its users come to rely on it.
For users of such devices, living autonomously, self-determinedly, and in dignity comes to depend on the functioning of these devices – and, of course, on the software that runs them. Like any software, the kind running on medical implants can have security vulnerabilities. Imagine if your senses or your body’s motor functions could be tampered with – if you could be made to hear sounds that are not actually there, or if body parts stopped acting the way you want them to. In the worst case, this can be a matter of life or death. People carrying implants rely on these devices like they rely on other body parts. That is why the law must ensure the provision of maintenance, repairs and enhancements, including software updates.
This kind of care must also be maintained if for some reason, a manufacturer should go out of business, or decide not to support a device any longer. At our suggestion, the report calls for the creation of independent trusted entities that manufacturers are obliged to supply with comprehensive design instructions, including source code.
The Dieselgate emissions scandal underlined that public authorities need to be able to independently verify manufacturers’ claims. An understanding of how a device works, and the ability to hold the producer accountable in case of negligence, can only be achieved by giving authorities the possibility to reverse-engineer software and devices. The report stresses that the design and source code should be made available to authorities proactively so that they may understand the workings of a device.
* * *
The Greens/EFA Digital working group, of which I am a member, has produced a paper on our position on robotics and artificial intelligence (PDF) (ODT). Here are its 10 recommendations in short:
- An informed public debate. Society should be able to help shape technology as it is developing. Hence, public input and an informed debate are of the utmost importance. We call for a European debate aimed at shaping the technological revolution so that it serves humanity, through a series of rules governing in particular liability and ethics, reflecting the intrinsically European and humanistic values that characterise Europe’s contribution to society.
- Precautionary principle. We demand that research and technology be integrated to the maximum benefit of all and that potential unintended social impacts be avoided, especially when talking about emerging technologies. We propose that robots and artificial intelligence should be developed and produced based on an impact assessment, to the best available technical standards regarding security, and with the possibility to intervene. In accordance with responsible research and innovation, it is imperative to apply the precautionary principle and assess the long-term ethical implications of new technologies in the early phase of their development.
- Do no harm principle. Robots are multi-use tools. They should not be designed to kill or harm humans. Their use must respect guaranteed individual and fundamental rights, including privacy by design and in particular human integrity, dignity and identity. We underline the primacy of the human being over the sole interest of science or society. The decision to harm or kill a human being should only be made by a well-trained human operator. Thus, the use of robots in the military should not remove responsibility and accountability from a human. The deployment of robots and artificial intelligence should be in accordance with international humanitarian law and the laws of armed conflict.
- Ecological footprint. We acknowledge robotics and artificial intelligence can help shape processes in a more environmentally friendly way while at the same time emphasising the need to minimise their ecological footprint. We emphasise the need to apply the principles of regenerative design, increase energy efficiency by promoting the use of renewable technologies for robotics, the use and reuse of secondary raw materials, and the reduction of waste.
- Enhancements. We believe that the provision of social or health services should not depend on the acceptance of robotics and artificial intelligence as implants or extensions to the human body. Inclusion and diversity must be the highest priority of our societies. The dignity of persons with or without disabilities is inviolable. Persons carrying devices as implants or extensions can only live self-determinedly if they are the full owners of the respective device and all its components, including the possibility to reshape its inner workings.
- Autonomy of persons. We believe a person’s autonomy can only be fully respected when their rights to information and consent are protected, including the protection of persons who are not able to consent. We reject the notion of “data ownership”, which would run counter to data protection as a fundamental right and treat data as a tradable commodity.
- Clear liabilities. Legal responsibility should be attributed to a person. Regarding safety and security, producers shall be held responsible despite any existing non-liability clauses in user agreements. The unintended nature of possible damages should not automatically exonerate manufacturers, programmers or operators from their liability and responsibility. In order to reduce possible repercussions of failure and malfunctioning of sufficiently complex systems, we think that strict liability concepts should be evaluated, including compulsory insurance policies.
- Open environment. We promote an open environment, from open standards and innovative licensing models, to open platforms and transparency, in order to avoid vendor lock-in that restrains interoperability.
- Product safety. Robotics and artificial intelligence as products should be designed to be safe, secure and fit for purpose, as with other products. Robots and AI should not exploit vulnerable users.
- Funding. The European Union and its Member States should fund research to that end in particular with regards to the ethical and legal effects of artificial intelligence.
To the extent possible under law, the creator has waived all copyright and related or neighboring rights to this work.