When might we meet the first intelligent machines?

How close are we to living in a world where human intelligence is surpassed by machines? Throughout my career I have regularly engaged in a thought experiment in which I attempt to “think like a computer” to envision a solution to a programming challenge or opportunity. The divide between human thinking and software code has always been pretty clear.

Then, a few weeks ago, after several months of chatting with the LaMDA chatbot, now-former Google AI engineer Blake Lemoine said he had come to believe that LaMDA was sentient [subscription required]. Two days before Lemoine’s announcement, Pulitzer Prize-winning AI pioneer and cognitive scientist Douglas Hofstadter wrote an article stating [subscription required] that artificial neural networks (the software technology behind LaMDA) are not conscious. He, too, came to his conclusion after a series of conversations with another powerful AI chatbot, GPT-3. Hofstadter ended the article by estimating that we are still decades away from machine consciousness.

A few weeks later, Yann LeCun, chief scientist at Meta’s artificial intelligence (AI) lab and winner of the 2018 Turing Award, published a paper entitled “A Path Towards Autonomous Machine Intelligence.” In the paper he shares an architecture that moves past the debate over consciousness and sentience to propose a way to program an AI with the ability to reason and plan like humans. Researchers call this artificial general intelligence, or AGI.

I think we will come to regard LeCun’s paper with the same reverence we have today for Alan Turing’s 1936 paper, which described the architecture of the modern digital computer. Here’s why.

Action simulation based on a world model

LeCun’s first breakthrough is envisioning a way past the limitations of today’s specialized AI with his concept of a “world model.” This is made possible in part by a hierarchical architecture of predictive models that learn to represent the world at multiple levels of abstraction and across multiple time scales.

With this world model we can predict possible future states by simulating courses of action. In the paper, he states, “This can allow for analogy by applying the model configured for one situation to another situation.”
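To make that idea concrete, here is a minimal sketch in Python of planning with a learned world model: candidate action sequences are rolled forward inside the model and the one whose imagined outcome scores best is kept. The class and function names (`WorldModel`, `plan`, `score`) are illustrative assumptions, not LeCun’s actual architecture, and the “dynamics” here are faked rather than learned.

```python
# Illustrative sketch only: a toy "world model" planner.
# Names and structure are assumptions for explanation, not LeCun's design.
import random

class WorldModel:
    """Predicts the next (abstract) state given a state and an action."""
    def predict(self, state, action):
        # A real model would be learned from data; here we fake simple dynamics.
        return state + action

def score(state, goal):
    """Higher is better: negative distance to the goal state."""
    return -abs(goal - state)

def plan(model, start, goal, horizon=3, candidates=200):
    """Simulate random action sequences inside the world model
    and keep the sequence whose predicted end state scores best."""
    best_seq, best_score = None, float("-inf")
    for _ in range(candidates):
        seq = [random.choice([-1, 0, 1]) for _ in range(horizon)]
        state = start
        for action in seq:
            state = model.predict(state, action)  # imagine the consequence
        s = score(state, goal)
        if s > best_score:
            best_seq, best_score = seq, s
    return best_seq

if __name__ == "__main__":
    print(plan(WorldModel(), start=0, goal=3))  # e.g. [1, 1, 1]
```

The point of the sketch is that nothing is tried in the real world: the agent evaluates courses of action entirely inside its predictive model before committing to one.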

A configurator module to drive new learning

This brings us to the second major innovation in LeCun’s paper. He notes: “One can imagine a ‘generic’ world model for the environment, where a small part of the parameters are modulated by the configurator for the task at hand.” He leaves open the question of how the configurator learns to break a complex task down into a sequence of sub-goals. But that is essentially how the human mind uses analogies.

For example, imagine that you woke up in a hotel room this morning and needed to operate the in-room shower for the first time. You probably broke the task down quickly into a series of sub-goals, using analogies learned from operating other showers: first figuring out how to turn the water on with the handle, then confirming which way to turn the handle to make the water warmer, and so on. You could ignore the vast majority of the data points in the room and focus on the few that were relevant to those goals.
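As a rough illustration of that idea (purely a sketch; the sub-goal list and helper names are my assumptions, not anything from LeCun’s paper), a “configurator” can be thought of as something that decomposes a task into sub-goals and reuses one generic routine, steered by analogy, for each of them:

```python
# Purely illustrative: a "configurator" that breaks a task into sub-goals
# and reuses one generic routine for each. All names are assumptions.
def configure(task):
    """Return an ordered list of sub-goals for the task at hand."""
    if task == "operate unfamiliar shower":
        return ["find the handle", "turn the water on", "adjust temperature"]
    return [task]  # unknown tasks pass through unchanged

def achieve(sub_goal, known_analogies):
    """Solve a sub-goal by analogy to something already learned."""
    analogy = known_analogies.get(sub_goal, "trial and error")
    return f"{sub_goal}: solved by analogy to '{analogy}'"

known = {
    "find the handle": "showers at home",
    "turn the water on": "kitchen faucet",
    "adjust temperature": "handles usually turn one way for hotter",
}

for goal in configure("operate unfamiliar shower"):
    print(achieve(goal, known))
```

The open research question is how such a configurator would learn the decomposition itself, rather than having it hard-coded as it is here.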

Once started, intelligent machines learn on their own

The third major advance is the most powerful. LeCun’s architecture is based on a paradigm of self-supervised learning. This means the AI is able to learn by itself by watching videos, reading text, interacting with humans, processing sensor data or ingesting any other source of input. Most AI today must be trained on a diet of specially labeled data prepared by human trainers.
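A minimal example of the difference (an assumed toy illustration, not how LeCun’s system is built): in self-supervised learning the “labels” come from the data itself, for instance by hiding the next value in a raw stream and asking the model to predict it, so no human annotation is needed.

```python
# Minimal self-supervised example: the training targets are taken from the
# data itself (predict the next value), so no human-labeled dataset is needed.
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=500))  # unlabeled raw "sensor" stream

X = series[:-1].reshape(-1, 1)  # input: current value
y = series[1:]                  # target: the next value, taken from the data itself

# Fit a one-parameter linear predictor y ~ w * x with least squares.
w = np.linalg.lstsq(X, y, rcond=None)[0].item()

prediction = w * series[-1]
print(f"learned weight w={w:.3f}, next-step prediction {prediction:.3f}")
```

The same self-labeling trick, scaled up enormously, is how large language models learn from raw text by predicting hidden or upcoming words.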

Google’s DeepMind just released a public database produced by its AlphaFold AI. It contains the predicted structures of nearly all 200 million proteins known to science. It used to take researchers three to five years to experimentally determine the structure of just one protein. DeepMind and the AlphaFold trainers finished nearly 200 million within the same five-year window.

What does it mean when an AI can plan and reason for itself without human trainers? Today’s leading AI technologies – machine learning, robotic process automation, chatbots – are already transforming organizations in industries ranging from pharmaceutical research labs to insurance companies.

When they arrive, whether in a few decades or years, intelligent machines will bring both tremendous new opportunities and startling new risks.

Brian Mulconrey is an SVP at Sureify Labs and a futurist. He lives in Austin, Texas.
