Provident Investment Management

Artificial Intelligence: Skynet or More of the Same?

Artificial Intelligence has burst onto the scene in 2023, paced by OpenAI, L.P.’s release of ChatGPT (Chat Generative Pre-trained Transformer) last November and Microsoft’s further $10 billion investment in OpenAI, which will support incorporating Artificial Intelligence (AI) into current and future products. Investor enthusiasm has been somewhat bubble-like: companies viewed as being on the cutting edge of AI have been rewarded with rich valuations that will only be justified if AI produces profits. It seems that every company has jumped on the AI bandwagon. I can hardly get through the first few minutes of a company’s quarterly earnings call or conference presentation without hearing about how it uses AI or plans to do so in the future.

Outside the investment world, I’ve read several articles and blogs speculating that ChatGPT and its successors will eventually lead to Skynet, the fictional conscious AI from the Terminator movies that launched a global nuclear holocaust to destroy its enemy, humanity.

Is Skynet destiny, or just entertaining Hollywood fiction for sci-fi geeks like me? I decided to learn more about the history of Artificial Intelligence through the appropriately named book A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going, published in 2020 and written by University of Oxford Professor Michael Wooldridge. Professor Wooldridge earned his Ph.D. in AI in 1991 and has published more than 400 scientific articles on the subject in his more than 30 years in the field.

I found the book entertaining and very informative. Professor Wooldridge does an excellent job describing abstract computer concepts and techniques with clear language and informative examples. He wrote the book with two goals. The first is to describe what AI is and isn’t. Something like Skynet, an AI that is self-aware, conscious, and autonomous, what he calls “the grand dream,” isn’t currently possible because researchers don’t remotely understand how to create the mechanisms that give people intelligence. He explains that the mainstream of AI research today focuses on getting machines to do specific tasks that currently require human brains (and potentially human bodies) and for which conventional computing techniques provide no solution. The second goal is to tell the story of AI and what the future might hold.

There are many possible starting points for AI, as various historical figures going back to the 1600s speculated on the possibility of thinking machines. For Professor Wooldridge, the beginning of AI coincided with the beginning of computing and has a clear starting point: King’s College, Cambridge, in 1935, and a brilliant young math student named Alan Turing. Turing, who eventually secured a Ph.D., used mathematical concepts to devise simple abstract machines, each designed to solve a particular math problem, like finding prime numbers. He then described a single machine that could do the work of any of them; this Universal Turing Machine is, in essence, the blueprint for what we know today as the computer. Turing is best known for putting his mathematical machinery to work during World War II to break coded German communications.

As early computers were built in the 1940s and 1950s, researchers began to debate whether machines could “think”, attracting huge public interest. To silence doubters, Turing published the article “Computing Machinery and Intelligence” in 1950, introducing what is now called the Turing Test. If a human conversing with a computer through text cannot distinguish its answers from those of a person, then the computer must be deemed “intelligent”, regardless of how that is achieved. Researchers have since poked holes in the Turing Test, as there is much more to consciousness than fooling a human being.
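
As a toy illustration of the setup (mine, not Turing’s), imagine a judge exchanging text with a hidden respondent who may be either a person or a machine:

```python
import random

# Toy imitation game: the judge sees only text from a hidden respondent and
# must guess whether it is the person or the machine. Both respondents here
# are canned stand-ins invented purely for illustration.
def human(question):
    return "Hmm, let me think... I'd say yes."

def machine(question):
    return "Hmm, let me think... I'd say yes."  # mimics the human's style

def imitation_game(questions):
    respondent = random.choice([human, machine])
    for q in questions:
        print(f"Judge: {q}")
        print(f"Hidden respondent: {respondent(q)}")
    # Turing's criterion: if the judge can do no better than chance at
    # telling the two apart, the machine is deemed intelligent.
    return respondent is machine

was_machine = imitation_game(["Can machines think?", "Do you like poetry?"])
print("It was the machine." if was_machine else "It was the person.")
```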

While not evident at the time, the Turing Test was the first major contribution to the field of AI, and it ushered in a golden age that lasted from 1956 to 1974. Researchers believed that if they could build the component capabilities of intelligent behavior, it would be simple to put them together to create intelligence. These components include perception, the ability to take in information from an environment and act on it; machine learning, the ability to learn through experience; problem solving and planning, the ability to achieve a goal using a given set of actions; reasoning, the ability to derive new knowledge from existing facts; and natural language understanding, the ability to interpret human languages. Systems created during this period, with names like SHRDLU, STRIPS, and SHAKEY, were interesting, but they showed just how hard these supposedly basic capabilities were to implement: they are extremely complex, and progress was further held back by the limited processing speeds of the day.
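
To make one of these capabilities concrete, here is a toy sketch of my own (not an example from the book) of problem solving and planning as a search: start from a state, try the allowed actions, and stop when the goal appears. The tiny one-dimensional “world” below is invented purely for illustration.

```python
from collections import deque

def plan(start, goal, actions):
    """Breadth-first search: find a shortest sequence of actions
    that transforms `start` into `goal`."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        for name, act in actions.items():
            nxt = act(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # no plan exists

# Hypothetical toy problem: a robot at position 0 on a short track
# must reach position 3 using only "left" and "right" moves.
actions = {"left": lambda x: max(0, x - 1), "right": lambda x: min(5, x + 1)}
print(plan(0, 3, actions))  # ['right', 'right', 'right']
```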

Given the lack of progress, academic institutions and governments began to criticize AI researchers, eventually cutting research funding and ushering in the first “AI Winter,” from the early 1970s to the early 1980s. During this time, however, researchers figured out that AI was missing a key ingredient: knowledge. A new class of knowledge-based AI systems emerged, called expert systems, that encoded human expertise to solve specific, narrowly defined problems. Built from pre-programmed rules, expert systems took off, and institutions began to use them in practical ways, like MYCIN, which helped doctors diagnose blood infections, and R1/XCON, which configured VAX computers for Digital Equipment Corporation. Today, complex expert systems are ubiquitous.
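
At their core, the expert systems of this era were if-then rules applied to known facts. The miniature below is my own invented illustration, not MYCIN’s or R1’s actual rules: it simply keeps firing rules until no new conclusions emerge (a technique known as forward chaining).

```python
# A miniature rule-based "expert system." Each rule says: if all of these
# facts are known, conclude a new fact. Rules and facts are invented.
rules = [
    ({"fever", "infection_suspected"}, "order_blood_culture"),
    ({"order_blood_culture", "gram_negative"}, "suggest_antibiotic_A"),
]

def forward_chain(facts, rules):
    """Fire any rule whose conditions are all known, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "infection_suspected", "gram_negative"}, rules))
```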

Expert systems matured over the 1980s, and with few other AI breakthroughs the second “AI Winter” set in. In the late 1980s, however, a new branch of AI formed, called Behavioral AI. Its founder, roboticist Rodney Brooks, rejected the idea that intelligence could be assembled from component capabilities. Brooks believed that AI systems needed to interact with their environment, reacting to whatever situation the system finds itself in with appropriate behavior. Brooks went on to co-found iRobot, whose early vacuum-cleaning robot was taught only six behaviors and given one task: sweep the floor.
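
Brooks’s approach can be sketched as a simple loop: check a short list of behaviors in priority order against what the sensors report, with no world model or planner anywhere. The behaviors below are my own invention, loosely in the spirit of a robot vacuum.

```python
# A behavior-based control step: simple sensor-triggered behaviors
# checked in priority order. No map, no plan, just react.
def control_step(sensors):
    if sensors["cliff_ahead"]:    # highest priority: don't fall down stairs
        return "back_up"
    if sensors["bumped"]:         # react to a collision
        return "turn_random"
    if sensors["dirt_detected"]:  # linger where the floor is dirty
        return "spot_clean"
    return "drive_forward"        # default: keep sweeping

# One simulated tick: the robot just bumped into a chair leg.
print(control_step({"cliff_ahead": False, "bumped": True, "dirt_detected": False}))
# -> turn_random
```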

AI researchers took Behavioral AI and added reasoning capabilities, creating software agents: logical programs that act on our behalf. Agents were limited in the 1990s by the processing speed and data storage then available. Since 2000, however, agents have become prolific, even suggesting the full word as I type just its first few characters. If you have an iPhone and use Siri, or an Amazon device that uses Alexa, you are interacting with an agent.
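
That word-suggestion feature is a tiny agent in exactly this sense: a program that watches what you type and acts on your behalf. Here is a bare-bones sketch of my own, with a made-up vocabulary; real systems rank suggestions using data about what you actually type.

```python
# A toy autocomplete "agent": suggest full words from the first few
# characters typed. The vocabulary is invented for illustration.
VOCAB = ["investment", "investor", "intelligence", "interest", "interact"]

def suggest(prefix, vocab=VOCAB):
    return [word for word in vocab if word.startswith(prefix.lower())]

print(suggest("inve"))  # ['investment', 'investor']
print(suggest("inte"))  # ['intelligence', 'interest', 'interact']
```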

In 2014, interest in AI surged when Google (now Alphabet) acquired DeepMind, a UK-based AI start-up employing fewer than 25 people, for $650 million. DeepMind’s expertise is machine learning, a subset of AI that had been progressing on its own track for many years. DeepMind married machine learning with neural networks, systems of interconnected nodes arranged in layers that loosely mimic the structure of the human brain. This combination became known as deep learning. Deep learning relies on two other ingredients: data and processing power. Today’s computers support networks with millions of nodes, store enormous data sets, and are lightning fast. Google search has utilized deep learning for several years.
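
To make “nodes arranged in layers” concrete, here is a minimal sketch of my own (certainly not DeepMind’s code) of a two-layer network computing an output. Real deep networks stack many more layers and learn their weights from data; the weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    """One layer of nodes: weighted sums of the inputs, then a simple
    nonlinearity (ReLU) so the network can model complex patterns."""
    return np.maximum(0.0, x @ weights + bias)

# A tiny network: 4 inputs -> 3 hidden nodes -> 1 output node.
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = np.array([0.5, -1.0, 2.0, 0.1])  # one example input
hidden = layer(x, w1, b1)            # first layer of nodes
output = hidden @ w2 + b2            # linear output node
print(output)
```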

Deep learning is responsible for many recent AI advances. Large Language Models such as ChatGPT use deep learning techniques to interpret human language and generate answers from the data they were trained on. AI software can also generate images, like those from Adobe Firefly. And deep learning is being married with advances in sensors, moving us toward driverless cars.

These new AI capabilities are marvels, but Professor Wooldridge explains that they are nowhere near the complexity necessary to create actual, conscious intelligence. Further, since we each perceive the world independently, we don’t always agree with others when experiencing the same event. Computers don’t have emotions, a key ingredient in human perception and intuition. No one has figured out how to solve these problems, and we are a long, long way from even partial answers.

As we try to separate market hype from reality, AI will continue to evolve incrementally, as it has for many years. It is our job as investment managers to select the companies that figure out how to use AI to make money by gaining competitive advantage with customers. I’m far more concerned with that aspect of my life than with looking over my shoulder for Skynet.

Dan Boyle, CFA