Robotics IRL vs. in science fiction
Robotics is an interdisciplinary scientific field that has for years been hailed as one of the defining technologies of the future. Robotic figures have long dominated children's hit cartoon series ("Transformers", "Voltron") and movies, the best-known being the "Star Wars" and "Terminator" franchises. Hollywood has used robots in innumerable films, in all kinds of roles. One of the most important roles of robots in Hollywood is to boost ticket sales.
The most prominent recent example of a robot-related movie is the "adaptation" of Isaac Asimov's classic book "I, Robot". It is not unusual for Hollywood to recycle and mutilate book plots to create box-office hits, and with a subject as hyped-up as robots, one would be naïve to expect a robotics movie released in 2004, starring Will Smith as the luddite protagonist, to be any deeper than the aforementioned cartoon series. In fact, the movie's main similarity to the book is the title; the remaining similarities are mostly names of people and products featured in the original. In the movie, the audience gets to see robots, the robot psychologist Dr. Susan Calvin, Asimov's omnipresent, over-rated and over-quoted "Three Laws of Robotics", the company U.S. Robotics, and yet more robots, all mixed up and blatantly interspersed with product placement ranging from chocolate milk powder (Ovaltine) to cars (Audi).
Of course, in a film based on Asimov-modeled robots, one can expect to see the "Three Laws of Robotics" presented as the ultimate truth of the universe. It is no wonder that these laws have become an annoying science-fiction cliché, as ignorance of the current state of real-life robotics often leads to misinterpretation of their importance. The three laws state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Although these sentences are described as "laws", they are really rules that override one another in the order presented. Faced with a given situation, an Asimov-modeled robot (a robotic machine following this set of rules) would evaluate its possible actions against the laws in order of priority: the First Law trumps the Second, and the Second trumps the Third. How the robot actually evaluates these rules is only vaguely explained in Asimov's books through the "positronic brain" that each robot features, with no further information on the logic the "positronic brain" follows to decide, for example, whether a human being is in harm's way.
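To make the "override" relationship concrete, here is a minimal sketch (in Python) of the Three Laws as a strict priority ordering. The predicates such as harms_human are entirely hypothetical stubs; deciding them for real is exactly the unsolved problem discussed below.

```python
# Hypothetical sketch: the Three Laws as a lexicographic priority over candidate actions.
# The three predicates below are invented stubs; Asimov's "positronic brain" never
# explains how such judgements would actually be made.

def harms_human(action, world):          # First Law check (stub)
    return action in world.get("harmful_actions", set())

def violates_order(action, world):       # Second Law check (stub)
    orders = world.get("ordered_actions")
    return bool(orders) and action not in orders

def endangers_self(action, world):       # Third Law check (stub)
    return action in world.get("risky_actions", set())

def choose_action(candidates, world):
    """Pick the action that violates the most important law the least.

    Python compares the tuples element by element, so a First Law violation
    outweighs any number of Second or Third Law violations -- the "override"
    ordering described above.
    """
    key = lambda a: (harms_human(a, world), violates_order(a, world), endangers_self(a, world))
    return min(candidates, key=key)

# Toy example: ordered to fetch coffee; crossing a road is risky, pushing a human is harmful.
world = {"ordered_actions": {"fetch_coffee"},
         "harmful_actions": {"push_human"},
         "risky_actions": {"cross_road"}}
print(choose_action(["push_human", "cross_road", "fetch_coffee"], world))  # -> fetch_coffee
```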
This is to be expected; after all, our civilization is still very far from solving the hardest problems in the field of Artificial Intelligence, the ones that would enable machines to think "outside the box" and evaluate such statements. In fact, decades or even centuries may pass before such a machine becomes as commonplace as many sci-fi books and movies would have us believe. In a strange, visionary way, however, the optimistic picture of advanced robotics painted by sci-fi provides the goals we would like to reach through scientific research, alongside various misconceptions about what exactly a robot is. The Merriam-Webster Online Dictionary defines "robot" as:
1. a : a machine that looks like a human being and performs various complex acts (as walking or talking) of a human being; also : a similar but fictional machine whose lack of capacity for human emotions is often emphasized
   b : an efficient insensitive person who functions automatically
2. a device that automatically performs complicated often repetitive tasks
3. a mechanism guided by automatic controls
Interestingly, most science fiction prefers definition 1a of "robot"; in other words, it favors the image of a robot as a mechanical construction capable of demonstrating a subset of human functions. Unfortunately, science fiction remains exactly that, fiction, because only in a handful of science fiction books are robots defined as what they mostly are: mechanisms guided by automatic controls (definition 3). This is a significant omission, because it leads to confusion about what a robot should actually be able to do.
A robot is not fully defined by any of these descriptions alone. In fact, robotics scientists still argue about the "correct" definition of a robot, although the main trend is to define a robot as a dynamic mechanism that responds to input using automatic control structures, and that sometimes looks similar to a human being. The similarity to us is kept in the definition only as a possibility, since otherwise even the latest and greatest washing machines could be characterized as "robots". One fact that escapes the attention of people who have never worked with robots, however, is that washing machines employ more sophisticated control systems than most mobile robots touted as "robotics experiments" in academic programs around the world; yet we never see, nor want to see, washing machines and similarly complex electric appliances as "robots". We rather group them under the broader category of "mechatronics".
Mechatronics is a superset of robotics: a robot is always a mechatronic product, but not every mechatronic product is a robot. Additionally, robots should display a similarity to humans in their "behavior", be it talking, walking, or any other function that distinguishes a human from a lifeless machine. In fact, a test of a machine's similarity to a human has existed for over half a century. The Turing Test, proposed by Alan Turing in 1950, is a simple test in which a human leads a natural-language conversation with two counterparts without being able to see either of them. One is another human; the other is a machine imitating a human being. The machine's similarity to a human is judged by its ability to pass the test, that is, by the human judge's inability to tell which of the two conversation partners is the machine.
Whether passing the Turing Test demonstrates anything more than the ability of an ordinary programmable machine, such as a computer, to cheat its way through a natural-language conversation with a human is debatable; programs such as Joseph Weizenbaum's ELIZA have tricked humans into believing they were talking to an actual human being more than once. Surely, natural-language understanding is not all a robot needs in order to seem human-like.
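The trick behind ELIZA-style programs is remarkably shallow: match a keyword, reflect the user's own words back in a canned template, and never understand a thing. A toy sketch of that idea follows; the patterns and replies are invented for illustration and are not Weizenbaum's actual DOCTOR script.

```python
import random
import re

# Toy, ELIZA-flavoured responder: keyword patterns mapped to canned reply templates.
# No understanding takes place; the user's own words are simply echoed back.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), ["Why do you need {0}?", "Would {0} really help you?"]),
    (re.compile(r"\bI am (.+)", re.I),   ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I),    ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(utterance):
    # First matching rule wins; otherwise fall back to a non-committal phrase.
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(respond("I am tired of robots in movies"))
# e.g. "Why do you think you are tired of robots in movies?"
```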
Behind the glossy images painted by science fiction, a robot is (or rather, should be) a jumble of complicated control systems. From the point of view of control systems engineering, the robot is an abstract mechanical system. It can be given a command to execute a task, and its reaction is to do something to its environment. Its action can be affected by physical parameters such as its position in space, the air pressure or temperature and, of course, by the general state of the environment, since the environment in turn acts on the robot itself. The corrective input to the robot resulting from changes in the surrounding environment is called "feedback". Ideally, the robot would be fully autonomous, meaning it could completely regulate itself and generate its own inputs and excitations. Pragmatically, what today's robots do is wait for a "starting kick" in the form of an external command and then enter a loop that checks sensors, drives actuators and performs other actions that affect the environment. Because of this lack of self-impulse, real-life robots are at best semi-autonomous.
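Stripped of the terminology, the "starting kick then loop" arrangement just described can be sketched in a few lines. The heading sensor and steering actuator below are invented stubs rather than any real robot's interface, and the correction applied is plain proportional feedback.

```python
import random
import time

# Bare-bones sense-correct-act loop. The sensor and actuator are stand-in stubs
# for real hardware; the structure (read feedback, compute correction, act, repeat)
# is the point.

def read_heading_error_deg():
    # Stub: pretend the robot drifts off course by a few degrees between readings.
    return random.uniform(-10.0, 10.0)

def set_steering(correction_deg):
    # Stub: a real robot would command motors or servos here.
    print(f"steering correction: {correction_deg:+.1f} deg")

def run(gain=0.5, cycles=10):
    """Calling run() is the external 'starting kick' that puts the robot in its loop."""
    for _ in range(cycles):
        error = read_heading_error_deg()   # feedback: deviation from the desired heading
        set_steering(-gain * error)        # proportional correction steers back toward it
        time.sleep(0.1)                    # real control loops run at a fixed rate

run()
```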
Even a semi-autonomous robot, though, retains its ability to fake similarity to a human being by mimicking functions specific to our species, such as listening and talking, or functions basic enough to be taken for granted, such as movement.
One of the noblest goals of artificial intelligence is providing robots, and machines in general, with the ability to learn. The field of machine learning has made some progress and, like many other scientific fields, has enjoyed its share of hype in the past. One notable technology related to machine learning and, specifically, pattern recognition is the "artificial neural network" (ANN): a complex interconnection of simple, "dumb" units that, wired together correctly, can function as a "black box" performing anything from basic mathematical operations to judging whether a product sample meets specific geometric requirements. Although ANNs were seen as a very futuristic concept in the 1980s, experimentation made it apparent that they are mostly a solution looking for a problem, and scientists have long since turned to other approaches to machine learning. Despite the fact that robots are portrayed as the ultimate learning machines in science fiction ("My CPU is a neural net processor. I'm a learning computer." – the Terminator in "Terminator 2: Judgment Day"), today's robots are still so primitive that incorporating the ability to learn should be among the least of a robot developer's worries.
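For a sense of how "dumb" the singular units of an ANN really are, the toy sketch below trains a single artificial neuron (a perceptron) to reproduce the logical AND function. The data and learning rate are arbitrary illustrations, about as far from the Terminator's "neural net processor" as one can get.

```python
# A single artificial "neuron" (perceptron) learning logical AND.
# Each unit is just a weighted sum plus a threshold; whatever "intelligence" an ANN
# shows comes from wiring many such units together and adjusting their weights.

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for _ in range(20):                        # a few passes over the data suffice here
    for x, target in samples:
        error = target - predict(x)        # 0 when correct, +/-1 when wrong
        weights[0] += rate * error * x[0]  # nudge the weights toward the right answer
        weights[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in samples])    # -> [0, 0, 0, 1]
```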
Other robotic imitations have been possible for quite some time now. As mentioned earlier, basic conversation with a computer has been available since the 1970s. Combined with today's acceptable (but far from perfect) speech synthesis programs, it can create the image of a conversing computer/robot. Elementary speech synthesis is such a commodity nowadays that operating systems like Apple's Mac OS X build it in, both as a quirky, futuristic and definitely non-essential way of delivering information to the user and as an accessibility option for visually impaired computer users. Speech recognition is being actively developed by many companies around the world, and various software packages already facilitate speech recognition as a means of text input to a computer. In a few words, putting together a robot that can at least speak and "listen" like a human is possible with minimal effort.
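As an indication of how low the bar is on the synthesis side: Mac OS X ships with a `say` command that speaks whatever text it is handed, so one line of glue code is enough to give a robot a voice. The phrase below is made up, and the sketch obviously only runs on a Mac.

```python
import subprocess

# Mac OS X only: the bundled `say` command performs the text-to-speech work.
# The spoken phrase is a placeholder; a real robot would pass in a sensor reading
# or a status message instead.
subprocess.run(["say", "Obstacle detected ahead"], check=True)
```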
The most far-fetched idea in science fiction is the ability of robots to experience feelings. The idea of feelings breaches the very definition of a robot, for once a robot is advanced enough to feel, what properties can be used to differentiate it from a human being? Such philosophical questions will not be upon us for many centuries, if not millennia, to come.
In summary, despite what science fiction presents as the immediate future, real life proves the genre extremely optimistic in its assumptions and predictions. Instead of super-intelligent, gigantic robots with ultra-powerful laser weapons, we get "intelligent" vacuum cleaners (iRobot's Roomba), "robotic" dog gadgets (Sony's AIBO) and other products that claim to be far more advanced than they actually are. Today's state-of-the-art robots are usually little more than mechanical structures equipped with a computing unit and a bunch of sensors. The built-in computer runs a loop all the time, checking sensors, driving around, and activating levers and pulleys to produce changes in the robot's surroundings. Even seemingly trivial tasks, such as navigating around a room without colliding with obstacles, are still complicated enough to be valid material for the numerous academic papers published every year in the field of mobile robotics.
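The kind of loop such a robot actually runs is humblingly simple. Below is a reactive "don't hit the wall" sketch, with invented stubs standing in for a hypothetical range-finder and wheel motors:

```python
import random

# Reactive obstacle avoidance of the "check sensors, drive around" variety.
# The range-finder and motor functions are invented stubs for a hypothetical platform.

SAFE_DISTANCE_CM = 30.0

def front_distance_cm():
    return random.uniform(5.0, 200.0)   # stub range-finder reading

def drive(left_speed, right_speed):
    print(f"wheels: left={left_speed:+.1f} right={right_speed:+.1f}")

for _ in range(20):                     # a real loop would run until the robot is switched off
    if front_distance_cm() < SAFE_DISTANCE_CM:
        drive(+0.5, -0.5)               # something ahead: spin in place to find a clear path
    else:
        drive(+1.0, +1.0)               # path clear: go straight
```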
Of course, there are exceptions to this multitude of relatively cumbersome robots. The most impressive robots are undoubtedly NASA's Mars Exploration Rovers, nicknamed "Spirit" and "Opportunity". Operating very far from our familiar environment, these robots have to face adverse conditions and handle unexpected situations. Upon closer examination, however, one sees that they bear no resemblance to the "other" robots, the ones common in science fiction. For one thing, featuring six wheels each, they are not exactly human-like, which is understandable, since upright walking is very complex and not very efficient (see Honda's ASIMO walking robot). They are in fact very feature-rich vehicle platforms carrying robotic arms, complex wheel suspension mechanisms and data acquisition systems. Their mission? Impressive indeed. Their function as autonomous robots? Impressive in terms of capabilities, state-of-the-art in terms of implementing the complex control structures required for the Mars exploration tasks, yet trivial in terms of what it actually is; these are simply not the "robots" that science fiction "promised" we would have in the 2000s.
In a future far, far away, maybe we will get around to having humanoid robots that can walk, talk, listen and learn. Hopefully, they will not keep bumping into walls.
<Patrician|Away> what does your robot do, sam
<bovril> it collects data about the surrounding environment, then discards it and drives into walls