Artificial intelligence has a long and rich history spanning seven decades. What’s interesting is that AI existed even before modern computers: research into intelligent machines was one of the starting points for how we got into digital computing in the first place. Early computing pioneer Alan Turing was also an early AI pioneer, developing ideas about machine intelligence in the late 1940s and 1950s. Norbert Wiener, the founder of cybernetics, was building autonomous robots in the 1940s, when there weren’t even transistors, let alone big data or the cloud. Claude Shannon built a hardware mouse that could solve mazes without the need for deep learning neural networks. W. Gray Walter built two autonomous cybernetic turtles that, in the late 1940s, could navigate the world around them and even find their way back to their charging point without a single line of Python. It was only after these developments, and the subsequent coining of the term “AI” at the Dartmouth Conference in 1956, that digital computing really took off.
Given all of this, with all of our amazing computing power, unlimited internet and data, and cloud computing, surely we should have realized the dreams of AI researchers: planets orbited by autonomous robots and the intelligent machines envisioned in 2001: A Space Odyssey, Star Wars, Star Trek, and other science fiction of the 1960s and 1970s. And yet our chatbots today are not much smarter than those developed in the 1960s, and our image recognition systems, serviceable as they are, still can’t see the elephant in the room. Are we really achieving AI, or are we falling into the same traps over and over again? If AI has been around for decades, why are there still so many challenges in adopting it? And why do we keep repeating the mistakes of the past?
AI sets its first trap: The first AI winter
To better understand where we are currently with AI, you need to understand how we got here. The first major wave of AI interest and investment ran from the early 1950s to the early 1970s. Much of the early AI research and development came from the burgeoning fields of computer science, neuropsychology, brain science, linguistics, and related disciplines. AI research built on exponential improvements in computing technology, and combined with funding from government, academic, and military sources, this led to some of the earliest and most impressive advances in AI. However, even as computing technology continued to mature, AI innovation had nearly ground to a halt by the mid-1970s. AI promoters realized they were not achieving what was expected or promised from intelligent systems, and it seemed like AI was a goal that would never be reached. This period of dwindling interest, funding, and research is known in the industry as the first AI winter, so named because interest from investors, governments, universities, and potential customers went cold.
AI showed so much promise, so what happened? Why don’t we live like the Jetsons? Where’s our HAL 9000 or Star Trek computer? AI created fantastic visions of what could be, and people promised that we were right around the corner from making those visions a reality. However, these excessive promises were met with underdelivery. While there were some great ideas and tantalizing demonstrations, the fundamental problems proved too hard to overcome given the era’s lack of computing power, data, and research understanding.
Complexity became an ongoing problem. The very first natural language processing (NLP) systems, and even a chatbot called ELIZA, were developed in the 1960s during the Cold War era. The idea of machines that could understand and translate text held great promise, especially for intercepting cable communications from Russia. But when Russian text was fed into an NLP application, it came back with incorrect translations. People quickly realized that word-for-word translation was just too simplistic for the complexity of real language. These ideas weren’t entirely grandiose or entirely impossible, but when it came down to development, things proved a lot more difficult than researchers first thought. Time, money, and resources were diverted to other endeavors. The promise of what AI could do, and then the failure to deliver on that promise, got us into our first AI winter.
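To see just how shallow 1960s-era "conversation" really was, here is a minimal ELIZA-style sketch in Python. The rules below are illustrative stand-ins, not Weizenbaum's original script: the program does nothing but match a few patterns and reflect pronouns back at the user.

```python
import re

# A tiny illustrative reflection table: swap first person for second
# person so the echoed fragment reads like a reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered pattern -> response-template rules; the catch-all comes last.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r".*", re.I), "Please tell me more."),
]

def reflect(fragment: str) -> str:
    """Swap person in a captured fragment, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence: str) -> str:
    """Return the first matching rule's template, filled with the
    reflected capture groups. No understanding is involved."""
    for pattern, template in RULES:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."
```

For example, `respond("I need my computer")` returns "Why do you need your computer?" — an answer that looks attentive but is pure string substitution, which is why such systems collapsed the moment the conversation strayed from the script.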
Caught again: The second AI winter
Interest in AI research was rekindled in the mid-1980s with the development of expert systems. Corporations adopting expert systems took advantage of the emerging power of desktop computers and affordable servers to do work previously assigned to expensive mainframes. Expert systems helped industry automate and simplify decision-making on Main Street and beefed up electronic trading systems on Wall Street. Soon the idea of the intelligent computer was gaining ground again. If it could be a trustworthy decision-maker in the company, surely we could have the intelligent computer back in our lives.
The promise of AI increasingly looked positive, but in the late 1980s and early 1990s, AI was still viewed as a “dirty” word by many in the industry because of its previous failures. However, the growth of servers and desktop computing had reignited interest in AI’s potential. In 1997, IBM’s Deep Blue beat chess grandmaster Garry Kasparov, and some people thought we had made it: intelligent machines were making a comeback. In the 1990s, companies looked for more ways to introduce AI. However, expert systems proved brittle, and companies were forced to be more realistic with their money, especially after the dot-com crash at the beginning of the millennium. People also realized that IBM’s Deep Blue was only good at playing chess; it could not be ported to other problems. These and other factors led us into the second AI winter. Once again, we overpromised and underdelivered on what AI was capable of, and AI became a dirty word for another decade.
The thawing of winter: But are storm clouds gathering?
In the late 2000s and early 2010s, interest in AI thawed again, fueled by new research and lots and lots of data. In fact, this latest AI wave really should be called the Big Data/GPU computing wave. Without these, we would not have been able to address some of AI’s previous challenges, particularly those related to deep learning-based approaches that drive a very large percentage of the latest wave of AI applications. We now have a lot of data and we know how to effectively manage this large amount of data.
This current AI wave is clearly data-driven. In the 1980s, 1990s, and early 2000s, we figured out how to build large, queryable databases of structured data. But the nature of data began to change, as unstructured data like email, images, and audio files quickly made up the majority of the data we created. A key driver of this current wave of AI is our ability to handle massive amounts of unstructured data.
When we managed to do that, we hit a critical threshold: neural networks capable of incredible performance, and suddenly anything seemed possible. We saw a massive boom in which AI found patterns in this sea of unstructured data, predictive analytics powered recommendation systems, NLP applications like chatbots and virtual assistants took off, and product recommendations became incredibly accurate. With so much advancement in such a short amount of time, people once again got caught up in the idea that AI can do anything. Another factor that helped get us to where we are today is venture capital. AI was no longer funded solely by governments and big corporations; venture capital funding has allowed startups focused on AI to thrive. The combination of lots of money, lots of promises, lots of data, and lots of hype has warmed up the AI environment. But are we heading for another iteration in which we overpromise and fail to deliver on AI’s capabilities? The signs point to yes.
The problem with AI expectations
The problem of overpromising and underdelivering is not a problem with any particular promise; it’s the implicit promise. People carry an expectation in the back of their minds of what AI can do and when. For example, most of us want Level 5 fully autonomous vehicles, and the potential for this application is huge. Companies like Tesla, Waymo, and others sell their systems based on that promise and on users’ dreams. But the reality is that we are still a long way from fully autonomous vehicles. Robots are still falling down escalators. Chatbots are smarter but still pretty dumb. We don’t have AGI. It’s not that big thinking per se is the problem; it’s that small innovations get talked up into game-changing disruptions, and before you know it we’ve once again over-promised and under-delivered. And when that happens, you know what inevitably comes next: an AI winter.
Businesses today must go to great lengths to deliver on their promises. One way to deal with this is to reduce the scope of those promises. Don’t promise that AI systems will be able to diagnose patients from medical image data when it is clear these systems are not yet up to the task. Don’t promise fully autonomous vehicles when collision avoidance is the more achievable goal. Don’t promise amazingly intelligent chatbots that fail at basic interactions. The key to managing expectations is to set ones that offer good ROI, and then meet them, without promising the world or sentient machines.
The reason more sober, agile, and iterative approaches to AI, such as the Cognitive Project Management for AI (CPMAI) methodology, are being adopted is that organizations have a lot at stake with their AI projects. Failing this time may not be an option for companies that have put so much time and effort into them. We run the risk of repeating the same mistakes and getting the same results. For AI to become a reality, we need to be more realistic in our expectations and deliver on those visions incrementally. Otherwise, we fall back into the same old AI story of over-promising and never delivering.