“A Brief History of AI”

1872: Samuel Butler’s novel “Erewhon” toyed with the idea that, at some indeterminate point in the future, machines might come to possess consciousness.
By Scott Hamilton
I know I have written a lot about modern Artificial Intelligence (AI), but I just came across a copy of an old magazine from 1989 dedicated to AI and neural networks. It made me rethink some things about AI, and I decided you might be interested in knowing some of the history. As it turns out, just as with general computing, very little has changed in AI research over the last few decades. You may find it hard to believe, but AI was an idea in myth and legend as far back as Greek mythology: Talos was a bronze creature that acted as a guardian for the island of Crete and, according to legend, threw boulders at invading ships. It is unclear whether Talos ever existed as some kind of weapon or was just a story.
If we jump forward a couple of millennia, we find legends of artificial beings: the Swiss alchemist Paracelsus describes his procedure for fabricating an artificial man in “Of the Nature of Things.” Moving into the early nineteenth century, we are introduced to Frankenstein’s monster in Mary Shelley’s famous novel, and AI remains a popular topic in science fiction books and movies today. Very early attempts at AI were automata, realistic-looking humanoid machines. The oldest of these were the sacred statues of ancient Egypt and Greece, which were believed to have been imbued with real minds capable of wisdom and emotion.
It was not until the work of the Spanish philosopher Ramon Llull (roughly 1230 to 1315) that the first logical machines came on the scene. Llull believed that logic could be worked out mechanically, and so he felt it was possible to build a mechanical device capable of making logical decisions. Gottfried Leibniz later followed his work and redeveloped his ideas. Leibniz believed that logical thought could be described by mathematical logic and that mathematical logic could be mechanically calculated. He was not far from the truth, and his ideas drive part of what we see as modern AI.
Building on this prior work, Kurt Gödel, Alonzo Church, and Alan Turing laid the groundwork for the Church-Turing thesis, which implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. It was this thesis that led to the modern computer; granted, we manipulate these binary (0, 1) symbols electrically instead of mechanically. They also proved that there were limits to what mathematical logic could accomplish but, more importantly, showed that any step-by-step mathematical procedure could also be automated, which was a very important step toward modern AI.
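To give a feel for what “shuffling symbols as simple as 0 and 1” means in practice, here is a small sketch of my own (a toy illustration in Python, not anything from the original papers): a tiny Turing-style machine that adds one to a binary number using nothing but read-a-symbol, write-a-symbol, and move-the-head steps.

# Toy sketch of a symbol-shuffling machine (my own illustration).
# Transition table: (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus a carry becomes 0, keep carrying left
    ("carry", "0"): ("1", -1, "done"),    # 0 plus a carry becomes 1, carry resolved
    ("carry", " "): ("1", -1, "done"),    # ran off the left edge, write the final carry
}

def run(tape):
    cells = [" "] + list(tape)            # blank cell on the left in case of overflow
    head, state = len(cells) - 1, "carry" # start at the least significant bit
    while state != "done":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write
        head += move
    return "".join(cells).strip()

print(run("1011"))                        # prints 1100 (11 + 1 = 12)

Nothing in the machine “knows” arithmetic; it only looks up a rule, writes a symbol, and moves, yet the result is a correct addition. That is the heart of the thesis.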
The earliest research into “thinking” machines took place between 1930 and 1950, when neurology research revealed that the brain is an electrical network of neurons that fire in all-or-nothing pulses, suggesting that the mind could be represented by a Church-Turing style computing process. In 1950 Alan Turing published his famous description of the Turing test in his landmark paper “Computing Machinery and Intelligence.” He proposed that if a machine could carry out a conversation (over a teleprinter) that was indistinguishable from a conversation with a person, then the machine was “thinking.” We are very close today to passing the Turing test.
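That “all or nothing” firing maps almost directly onto code. Here is a rough sketch of my own, in the spirit of the McCulloch-Pitts neuron model of that era (the weights and thresholds below are just illustrative numbers, not anything from the original research): a unit that fires only when its weighted inputs reach a threshold.

def neuron(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of inputs reaches the threshold, else stay silent (0)."""
    total = sum(value * weight for value, weight in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A single all-or-nothing unit like this can already express simple logic:
print(neuron([1, 1], [1, 1], threshold=2))  # AND of (1, 1) -> 1
print(neuron([1, 0], [1, 1], threshold=2))  # AND of (1, 0) -> 0
print(neuron([1, 0], [1, 1], threshold=1))  # OR of (1, 0)  -> 1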
So if all these ideas existed as early as 1950, why has it taken this long to accomplish the goal? There are a couple of reasons. The first was what we now call the first AI winter: between 1974 and 1980, research nearly ground to a halt because the technology failed to live up to its hype and funding was pulled from many projects. Early researchers over-promised and under-delivered, and as a result scientific grant funding went to other projects. This was mainly because, in the early 1970s, AIs could only handle trivial versions of the problems they were meant to solve. Only part of the problem was a lack of computing power; the bigger issue was a lack of data.
As computers became more powerful in the early 1980s, there was a resurgence in AI research; the faster machines allowed the AI technology of the 1970s to work a little better, since researchers could load more information into the computer and run more powerful applications. New symbolic programming languages made it possible for AI to solve more logical problems, but much of the promise lay in perception, learning, and common sense, which still seemed difficult to achieve. I would argue that modern AIs imitate these tasks but cannot really achieve perception and learning. It was also during the 1980s that simulation of small neural networks first became practical. Unfortunately, researchers once again fell short of the promised goals, and we entered a second AI winter in the 1990s. The resurgence of the early 2000s came about as the internet grew.
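To give a sense of how small those 1980s networks were, here is a rough sketch of my own (nothing taken from the 1989 magazine): a single simulated neuron, trained with the classic perceptron learning rule, learning the logical OR function from four labeled examples.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR truth table
w1 = w2 = bias = 0.0
rate = 0.1                                    # learning-rate step size

for _ in range(20):                           # sweep the tiny data set repeatedly
    for (x1, x2), target in examples:
        output = 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0
        error = target - output               # classic perceptron update
        w1 += rate * error * x1
        w2 += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    prediction = 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0
    print((x1, x2), "->", prediction, "expected", target)

A network this size learns OR in a handful of passes over the data; scaling the same idea up to anything genuinely useful had to wait for the data and computing power described next.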
In the early 2000s the amount of information available to train neural networks grew exponentially, and today we finally have enough digitally stored and labeled information to feed massive neural networks and accomplish the dream goals of the AIs of the 1970s and 1980s. The winter is over and AI is making a strong impact on society, but if you look at the history closely, there were no fundamentally new developments; we just finally had enough data to feed the models and accomplish the goals. Until next week, stay safe and learn something new.
Scott Hamilton is an Expert in Emerging Technologies at ATOS and can be reached with questions and comments via email to shamilton@techshepherd.org or through his website at https://www.techshepherd.org.