
What Makes a Robot?


Jan 18, 2019

One of the very first tasks in my Robotics module is to answer the question “What Makes a Robot?”. I would love to dive deeper into this at a later date, when I am further ahead in the module, but I wanted to jot down a few first thoughts to reflect on later in my course. Firstly, I subscribe to the definition given by my lecturer that a robot “is a machine that can sense its environment, process information from its sensors and other internal information, decide what to do next and execute that decision.” However, what makes a robot is a far more loaded question. I’ll start with the picture at the top of my notes to keep this light-hearted.
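
To make that definition a little more concrete for myself, here is a minimal sketch of that sense, process, decide, execute cycle in Python. Everything in it (the distance sensor, the battery level, the actions) is made up for illustration rather than taken from the module:

```python
# A minimal, hypothetical sketch of the "sense -> process -> decide -> execute"
# cycle from the definition above. The sensor reading, internal state and
# actions are all invented for illustration; this is not a real robot API.

def sense():
    """Read the environment, e.g. a distance sensor (stubbed here)."""
    return {"distance_to_obstacle_cm": 42}

def process(reading, internal_state):
    """Combine sensor data with other internal information."""
    return {**reading, **internal_state}

def decide(world_model):
    """Choose the next action based on the processed information."""
    if world_model["distance_to_obstacle_cm"] < 10:
        return "turn_left"
    if world_model["battery_percent"] < 20:
        return "return_to_dock"
    return "move_forward"

def execute(action):
    """Carry out the decision (here we just print it)."""
    print(f"Executing: {action}")

internal_state = {"battery_percent": 80}
world_model = process(sense(), internal_state)
execute(decide(world_model))
```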

If you subscribe to Asimov’s “Three Laws of Robotics”, robots are governed by three major laws that their architecture (or programming) will not allow them to disobey:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws 

-Asimov, 1996, p.8 

These laws make sense: by programming them in this way, Asimov ensures that robots aren’t running around committing heinous crimes, murdering humans and self-destructing whenever they feel like it (that could be expensive). However, programming them in this way plays into the fact that the very idea of a robot was dreamt up as a sort of mechanical slave. Asimov’s stories subscribe to this idea: in one of the first short stories, older robot models (some of the very first human-resembling robots) reply to each command with “Yes, Master” or “No, Master”, and one of the characters explicitly points out that the first models were given a “slave complex”. This idea is central even to the very name “robot”, which comes from the Czech “robota“, literally translated as “forced labour”, and the Old Slavonic “ràbota“, a derogatory term meaning drudgery/slog/an unskilled menial job. (The term is attributed to Karel Capek and his 1920s play Rossum’s Universal Robots.)

 

Side Note: Interestingly, as well as being where the word “robota” was first used, science fiction has played a very important role in the continuing advancement of robotics, and Asimov’s novel was the first (or so the Oxford English Dictionary has us believe) to coin the term “robotics”.

 

Asimov’s novel is captivating, and I am only a quarter of the way through! The laws stated earlier have an impact on the machines in the novel when they come upon conflicting issues. In one story, a robot is commanded (Second Law) to retrieve an object. However, the closer the robot gets to the object, the greater the danger to itself (Third Law), and therefore it retreats to a safe distance, at which point the Second Law kicks in again and it attempts to retrieve the object once more. Again, the Third Law forces it to retreat, and so on. The robot is stuck in a loop (developers, the dreaded infinite loop) and at risk of short-circuiting (or the technical term for breaking). To break the infinite loop, the First Law must be used, as this is the Law with the greatest weight (or, for those CSS lovers out there, the greatest specificity).
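
The way the book describes that deadlock maps quite naturally onto a priority-ordered rule check. Here is a toy Python sketch of my own (the situation flags and actions are invented for the example, not taken from the novel or the module) showing how giving the First Law the greatest weight breaks the tie:

```python
# Toy illustration of resolving the laws by weight, a bit like CSS specificity.
# The situation flags and actions are made up for this example; this is not
# how the robots in the novel (or any real robot) are actually programmed.

LAWS = [
    # listed in descending weight: name, condition flag, action demanded
    ("First Law",  "human_in_danger", "go to the human's aid"),
    ("Second Law", "order_active",    "advance on the objective"),
    ("Third Law",  "danger_to_self",  "retreat to a safe distance"),
]

def resolve(situation):
    """Return the action demanded by the highest-weight law that applies."""
    for name, condition, action in LAWS:
        if situation.get(condition):
            return f"{name} wins: {action}"
    return "no law applies: idle"

# Far from the hazard, only the order applies; close in, self-preservation applies,
# so the robot bounces between the two as it moves.
print(resolve({"order_active": True}))                         # Second Law wins
print(resolve({"danger_to_self": True}))                       # Third Law wins

# A First Law trigger outweighs both and breaks the loop.
print(resolve({"order_active": True, "danger_to_self": True,
               "human_in_danger": True}))                      # First Law wins
```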

 

Although fictional, and although we are far from the sophisticated level of robotics in the book (apologies, Asimov!), these laws remain relevant today as we try to navigate this new and emerging field: how should we consider regulating robotics in the future? For example, although your first thought may be that human-like robots should [of course] have human-like traits, such as knowing not to kill another human, should [or is it necessary that] non-humanoid robots, such as a robotic vacuum cleaner or robotic lawnmower, have these same laws in their architecture? These household robots can be classed as autonomous because they can make decisions for themselves. For example, if there is a cat in their way they know to go around it (I am simplifying at this point) and, therefore, they are not controlled by a human during their set task. Taking into account that current lawnmowers (which are not autonomous; I think the term could be argued to be heteronomous?) kill approximately 69 Americans annually, it is perhaps personal opinion, but I think the autonomous versions would benefit from these laws in their architecture. I would definitely feel more comfortable if my autonomous fridge/freezer/knife rack/hoover/lawnmower had been programmed NOT to kill me [as its strongest law].

 

I am absolutely captivated by this module so far and am looking forward to diving deeper into robotics. I realise that, at this point, I have a very basic understanding of the complexities of robotics and the challenges currently faced. However, everyone must start somewhere, and sometimes it is nice to have fresh, unbiased and naive eyes look at a problem.

Grab a copy here: Amazon Link