We’re still a long way from seeing robots like those depicted in films such as Ex Machina (or even I, Robot), but the field of robotics and AI is constantly evolving, with new milestones being reached all the time. Here’s a refresher on some of the biggest landmarks in the development of robotics up to the present day.
1942 – Isaac Asimov sets out his Three Laws of Robotics
Our concept of robots as humanoids originates in science fiction, namely that of American author Isaac Asimov. His Three Laws of Robotics were codified in 1942, and state:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
1943 – Neural networks are introduced
Artificial neural networks (based on the neural networks in the human brain) are computing systems designed to simulate the way the human brain processes information. So far so high tech, but these elements – crucial to the most advanced AI today – are actually based on a mathematical model published in 1943. Written by Warren McCulloch and Walter Pitts, the paper, entitled “A Logical Calculus of the Ideas Immanent in Nervous Activity,” could hardly have been more revolutionary for the development of computer science. The first neural network machine was built in 1951 by Marvin Minsky and Dean Edmonds. Named SNARC (the Stochastic Neural Analog Reinforcement Computer), it was made from vacuum tubes, motors and clutches and was tasked with helping a simulated rat solve a maze.
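To give a flavour of the idea, here is a minimal sketch (in modern Python, not the notation of the 1943 paper) of the kind of threshold neuron McCulloch and Pitts described: the unit ‘fires’ only if the weighted sum of its binary inputs reaches a threshold. The specific weights, threshold and AND-gate behaviour below are illustrative choices, not taken from the paper itself.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold neuron.
# The weights, threshold and AND-gate example are illustrative, not from the 1943 paper.

def threshold_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# With equal weights and a threshold of 2, the neuron behaves like an AND gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", threshold_neuron([a, b], weights=[1, 1], threshold=2))
```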
1948 – William Grey Walter’s ‘turtle’ robots
William Grey Walter developed what are considered the first electronic autonomous robots, which he called machina speculatrix. The turtle-like robots, named Elmer and Elsie, were capable of finding their way to a charging station when their battery levels ran low. Walter’s work paved the way for BEAM (Biology, Electronics, Aesthetics and Mechanics) robotics – simple robots that do not require the computing power of a microprocessor.
1950 – The Turing Test
This was followed, in 1950, by Alan Turing’s eponymous test. Turing asked: “Are there imaginable digital computers which would do well in the imitation game?” The imitation game is the basis of the Turing test: a human judge, conversing through written questions and answers, must decide whether they are talking to a computer or to another human. Despite being criticised for its simplicity, the test has become influential in the way we think about artificial intelligence.
1954 – George Devol files for Unimate patent
Norman Heroux, George Devol and Joseph Engelberger designed and marketed the first programmable robot arm – called Unimate – and sold it to General Motors in 1960. This paved the way for industrial robots to complete repetitive, difficult or dangerous tasks.
1966 – The first incarnation of Siri – ELIZA – is developed
Today, chatting to natural language processing programmes like Alexa and Siri is a part of everyday life. But back in the ’60s, conversational computer programmes were rather more revolutionary. ELIZA was one of the first, created by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory between 1964 and 1966. The programme could carry out a conversation via text by following a ‘script’ that told it how to respond.
1966 – SRI develops Shakey the robot
“Shakey” the robot was designed by the Stanford Research Institute between 1966 and 1972 and was a landmark in robotics because it blended hardware and software so that it could perceive and reason about its surroundings. Shakey brought robots into the public consciousness after receiving wide-ranging media attention.
1969 – The idea behind ‘backprop’ is proposed
‘Backpropagation’ may not sound like much, but it’s one of the most important algorithms in the evolution of AI. In layperson’s terms, backpropagation lets a neural network adjust the weights of its connections when the output it produces doesn’t match the target it’s aiming for. This means the system can learn through trial and error, continually refining its output in a similar way to a young child learning how to do something. Each time the network gets something wrong, the error is passed backwards through its layers and the weights are nudged so that the next attempt is a little closer to the mark. Although the idea that became the foundation of backprop was first proposed in 1969, it wasn’t widely incorporated into machine learning until the mid ’80s.
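To make that concrete, here is a minimal sketch of backpropagation training a tiny one-hidden-layer network on the XOR problem. It illustrates the general technique rather than the 1969 formulation or any historical system; the network size, learning rate and NumPy implementation are illustrative choices of ours.

```python
# A minimal backpropagation sketch: a 2-4-1 sigmoid network learning XOR.
# All choices (architecture, learning rate, iteration count) are illustrative.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Randomly initialised weights and biases for a 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: what does the network currently predict?
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # The error between prediction and target...
    error = output - y

    # ...is propagated backwards through the layers (the chain rule),
    # telling us how much each weight contributed to the mistake.
    d_out = error * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)

    # Nudge every weight slightly in the direction that reduces the error.
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # should end up close to the XOR targets [0, 1, 1, 0]
```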
1969 – NASA lands on the moon
NASA used cutting-edge computing and robotics technology to land humans on the moon for the first time in 1969.
1977 – Star Wars is released in theatres
George Lucas popularised humanoid robots with the release of Star Wars: A New Hope and the creation of two of the most famous robots to date: R2-D2 and C-3PO.
1986 – First self-driving car goes for a test drive
Self-driving cars are still not mainstream today, so it might be surprising to learn that the first autonomous car took itself for a drive in 1986. The Mercedes-Benz van, fitted with cameras and sensors, was able to drive successfully on empty streets. The project was carried out by researchers at Germany’s Bundeswehr University Munich. Since then the technology used in self-driving cars has continued to improve.
1996 – Deep Blue defeats Garry Kasparov at chess
In February 1996, IBM’s chess computer Deep Blue beat world champion Garry Kasparov in a single game, although Kasparov went on to win that match. In May 1997, an upgraded Deep Blue defeated Kasparov over a full six-game match.
2004 – IBM starts work on Watson
IBM starts work on Watson, its successor to Deep Blue. Watson made headlines when it competed on the quiz show Jeopardy!, which requires a complex understanding of natural language, defeating former champions Ken Jennings and Brad Rutter in 2011.
2015 – Robot demonstrates self-awareness
In 2015, a (frankly adorable) robot demonstrated a degree of self-awareness in a version of the ‘wise men’ logic puzzle. Three robots were programmed to believe that two of them had been given a “dumbing pill” that made them mute. When the experimenter asked which of them hadn’t taken the pill, one robot said “I don’t know” out loud. After hearing itself speak, it quickly changed its answer, realising that it could not have received the pill. The robot’s behaviour showed that it recognised itself as an entity separate from the other two robots, indicating a certain level of self-awareness. The task had previously been used as a ‘sifting test’ to tell a human from a robot, because until this point robots hadn’t been able to pass it.
2016 – AlphaGo defeats Lee Sedol at Go
On March 15, 2016, AlphaGo, an AI system built by the UK-based company DeepMind, defeated world champion Lee Sedol at the ancient board game Go. This was a major milestone for DeepMind’s research into artificial intelligence that can ‘learn’ how to solve problems regardless of context, unlike Deep Blue, which was programmed for a specific use case.
2017 – AlphaGo Zero learns Go in three days with no help
A newer version of AlphaGo, AlphaGo Zero, learned to play Go by itself in just three days after being told only the rules. Previous versions had learnt the game by training on thousands of games played by human professionals. This version, however, simply played itself over and over, millions of times – starting by placing stones on the board at random but quickly learning winning strategies. This is exciting because it indicates that AI can create knowledge on its own with very little human direction, which has far more valuable applications than beating humans at board games. The same approach is now being applied to working out how proteins fold, a problem whose solution could revolutionise drug discovery.
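As a rough illustration of what self-play learning looks like, here is a deliberately simplified, self-contained sketch: an agent learning the toy game of Nim by playing against itself, starting with random moves and reinforcing the moves that led to wins. It is emphatically not AlphaGo Zero’s actual algorithm (which pairs a deep neural network with Monte Carlo tree search); the game, the tabular learner and every parameter here are illustrative stand-ins chosen for brevity.

```python
# A toy self-play learner for Nim (take 1-3 sticks; whoever takes the last stick wins).
# Both sides share one preference table; moves on winning trajectories are reinforced.
import random
from collections import defaultdict

STICKS, MAX_TAKE = 10, 3
prefs = defaultdict(lambda: defaultdict(float))  # prefs[sticks_left][move] -> score

def choose_move(sticks, explore=0.1):
    moves = list(range(1, min(MAX_TAKE, sticks) + 1))
    if random.random() < explore:
        return random.choice(moves)                    # explore: try something random
    return max(moves, key=lambda m: prefs[sticks][m])  # exploit: best move learned so far

def self_play(num_games=20000):
    for _ in range(num_games):
        sticks, player, history = STICKS, 0, {0: [], 1: []}
        while sticks > 0:
            move = choose_move(sticks)
            history[player].append((sticks, move))
            sticks -= move
            winner = player            # whoever moved last took the final stick
            player = 1 - player
        # Learn from the finished game: reward the winner's moves, punish the loser's.
        for s, m in history[winner]:
            prefs[s][m] += 1
        for s, m in history[1 - winner]:
            prefs[s][m] -= 1

self_play()
print("Preferred opening move after self-play:",
      max(prefs[STICKS], key=prefs[STICKS].get))
```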