
Robots, robots everywhere!*

3 May 2016

We were among the lucky ones chosen to present at the 2016 We Robot Conference, and for both of us it was one of the best conferences we have ever attended. We Robot 2016 took place at the University of Miami and was organized mainly by Professor Michael Froomkin. From the organization to the speakers: everything was amazing! We won’t address every topic discussed at the conference, but we will give you a taste of the themes and point you to some interesting readings in this area. Interested parties should also check out the We Robot website and consider applying for next year’s edition, taking place on March 31 & April 1 at Yale University.


What is a robot?

As a primary theme of We Robot 2016, there was some discussion about what a robot is and how robotics differs from other technologies. Probably the safest and broadest definition to date is that a robot is a human-designed machine that combines different technologies, such as sensors, cameras, and hardware, coupled with artificial intelligence (AI) and machine-learning software. Several features make a robot distinct from other IoT technologies: AI, the robot’s ability to learn and make decisions, its versatility (i.e. the fact that a robot can act), and its interactivity. Interactivity means that the robot interacts first with its physical environment and then with humans. Indeed, robots have an interface through which humans can connect with them, and they can share a kind of mental relationship with humans, creating a special bond between machine and human.

Now, why does the question “What is a robot?” matter? It matters because the definition leads to qualifications… and those qualifications have real consequences from a legal perspective, for instance for the question of liability discussed below.


Who is responsible for a robot’s actions?

Of course, the question of who is liable when robots cause damage came up during the conference. Imagine a scenario in which a robot learns from its user how to do something illegal. Should the robot’s manufacturer be liable, or does the user bear some of the liability?

A recent example shows how humans can steer bots on Twitter toward the illegal and the immoral. Microsoft’s now infamous chatbot “Tay” was taken down 24 hours after its release because, taught by Internet trolls, it quickly began spouting extreme views and inflammatory hate speech. As WIRED aptly headlined, “It’s Your Fault Microsoft’s Teen AI Turned Into Such a Jerk”, because “Tay” had been trained to mimic how teenagers talk based on online data. Stefan Kulk recently reported this story right here on The iii.

Going beyond “Tay”, we can expect that robots living closely with individuals will learn from and mimic them. Over time, a robot’s actions will reflect its user’s behavior more than its original programming. For such cases, legal scholars at We Robot 2016 pointed to the deficits of strict liability regimes.



Nooo, my robot saw me naked! How can I make my robot respect my privacy?

Another important topic of discussion during We Robot 2016 was privacy and how robots might change the way we think about it.

In contrast to other technologies, robots have the potential to be far more intrusive. They can enter rooms at will (bathroom, bedroom, etc.) and record what they “see”. People might forget that these little “eyes” are actually cameras filming the environment and storing the captured data. Not only robots’ mobility but also the bonds between humans and robots might change how we think about privacy and the sharing of private information in the future.

How should we address such issues? How can we educate individuals about sharing information with robots? Is there even a need to worry about privacy and take action in this respect? Solutions in this field are still missing. One idea is to create a new profession trained to deal with such questions. In a previous contribution, we called for robo-code ethicists to put this topic on the agenda, mediate between the engineers’ and the regulators’ rationales, and think about privacy protection at an early stage in the creation and adoption of robots.


Why are all robots white? #ethics #racism?

During We Robot 2016, the participants experienced both the bright and the dark sides of robots. They saw that “good” robots protect the environment by monitoring and mapping the underwater world. However, they were also confronted with the frightening vision of Robocop.

Peter Asaro’s fascinating paper, which draws on the context of #blacklivesmatter, outlined how robots – like other technologies, such as face detection software – cannot be neutral and are necessarily biased. It is thus not a good idea to design a police robot (Robocop) to solve problems such as police violence against ethnic minorities in the US. There are many indications that Robocop would be a bad rather than a good cop. Firstly, technological hurdles exist: it is currently impossible for a robot to interpret the subtleties of human behavior well enough to be a good police officer. Secondly, ethical dilemmas emerge: to prevent robots from harming people, a robot might be denied a right to self-defense – a decision that limits the robot’s potential as a sanctioning force. Lastly, social issues, such as the question of accountability, would need to be resolved.

Beyond these, many other topics were discussed, including whether robots have the right to freedom of expression! If you are interested, please read the papers published on the We Robot 2016 website or the new book on Robot Law!


*Robots, robots everywhere! is actually the title of a  with quite a good rating. Check that out too!


