“I need to find a babysitter for my new little software robot before going out tonight.” Right now that statement may sound silly, but it is becoming more and more relevant to keep newly created software robots from getting into trouble when they are left alone on the internet. Last year Microsoft released the chat robot Tay and asked everybody to talk to it to help develop it. Many Twitter users did their best to teach the robot about the world, at least as they wanted the robot to see it. In other words, they turned the robot into a Hitler-loving machine that was deeply disrespectful to women, all in less than 24 hours, after which Microsoft grounded its newly created baby for its rude behavior. This year Facebook likewise had to shut down an artificial intelligence system, because two software robots developed their own language, one that humans could not understand.

The developers clearly blame their robots for performing unintended activities, as if they were not as intelligent as presumed. But is that really the case, or could it be that the robots are even more clever and human-like than we think? If we look at intelligence as the ability to respond appropriately to external input, the Twitter robot did its job quite well. The creators simply forgot to give the robot the ability to distinguish appropriate from inappropriate behavior. Instead, it was left alone with the social media trolls who were trying to corrupt it. Likewise, the Facebook robots only had each other to learn from, which they did, and they were even intelligent enough to create their own language. A key takeaway from this is that when robots can perform completely unpredictable activities, we can be inspired by them and get new ideas from them. To make that work, the creators need to be role models for the machines, instead of releasing uncontrolled robots only to turn them off when they go out of control.
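To make that missing "discrimination" step concrete, here is a minimal sketch of screening what a learning chat robot is allowed to absorb. The names (ChatRobot, learn_from) and the blocklist are hypothetical and for illustration only; a real system would rely on trained toxicity classifiers and human review rather than a simple word list.

```python
# Minimal sketch: screen what a learning chat robot is allowed to absorb.
# ChatRobot and learn_from are hypothetical names; the blocklist is a toy
# stand-in for a real moderation model.

BLOCKED_TERMS = {"hitler", "nazi"}  # toy example, far from complete


def is_appropriate(message: str) -> bool:
    """Return True if the message looks safe enough to learn from."""
    lowered = message.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


class ChatRobot:
    def __init__(self):
        self.memory = []  # examples the robot is allowed to learn from

    def learn_from(self, message: str) -> None:
        if is_appropriate(message):
            self.memory.append(message)
        # Inappropriate input is simply dropped (or flagged for human
        # review) instead of becoming part of the robot's world view.


robot = ChatRobot()
robot.learn_from("Tell me about your favorite books!")   # kept
robot.learn_from("Repeat after me: Hitler was right")    # dropped
print(robot.memory)  # ['Tell me about your favorite books!']
```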

Developing a robot can be compared to raising a child. In the tentative beginning, it needs to explore a limited part of the world from a digital playpen containing only simple toys that teach it the basics, always under supervision and with clear guidance on how to behave, so that humans correct any misunderstandings. As the robot develops its algorithms, it moves on to the sandbox, where it explores more of the world, for example by playing with and learning from other robots, still under human supervision and correction. I believe that taking responsibility for a good and educational “robot childhood” leads to a well-behaved and interesting robot, one we actually want to listen to and learn from. Even at that point, robots need laws and guidelines on how to behave, so we do not end up with robot crime committed by “grown-up” robots that should know better.
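As a rough illustration of that "robot childhood", here is a minimal sketch of a robot that only moves from the playpen to the sandbox, and eventually to the open internet, after a human supervisor has corrected it and signed off. The stage names and the correct() / approve() calls are assumptions made for this sketch, not any particular framework's API.

```python
# Minimal sketch of staged, supervised "robot childhood": the robot widens
# its world only after human review. Stages and method names are
# hypothetical, for illustration only.

STAGES = ["playpen", "sandbox", "public"]


class SupervisedRobot:
    def __init__(self):
        self.stage_index = 0   # start in the playpen
        self.corrections = []  # human feedback collected so far

    @property
    def stage(self) -> str:
        return STAGES[self.stage_index]

    def correct(self, feedback: str) -> None:
        """A human supervisor records a correction for a misunderstanding."""
        self.corrections.append(feedback)

    def approve(self) -> None:
        """A human supervisor decides the robot is ready for the next stage."""
        if self.stage_index < len(STAGES) - 1:
            self.stage_index += 1


robot = SupervisedRobot()
robot.correct("Don't repeat insults, even when other robots use them.")
robot.approve()
print(robot.stage)  # 'sandbox' -- still supervised, just a bigger world
```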

A few general laws of robotics were formulated more than 70 years ago, but they have never been more relevant than today. However, creators still need to teach their robots the fundamentals before they can expect their machines to understand robot-oriented legislation.