What makes Gary Marcus angry?

“The current AI is illiterate. It can fool itself, but it doesn’t understand what it’s reading.”

Gary Marcus in an interview with Analytics India Magazine

The well-known NYU professor of psychology and neuroscience is a prominent critic of deep learning and AGI. Gary has debated the likes of Yann LeCun and Yoshua Bengio, and has written articles and books critiquing the current approach to deep learning.

Back in the early 1990s, Gary and Steven Pinker published a paper claiming that neural networks could not even learn the language skills of a small child. In 2012, Gary published an article in The New Yorker entitled “Is deep learning a revolution in AI?”, which argued that the techniques employed by Geoff Hinton were not powerful enough to understand the fundamentals of natural language, let alone duplicate human thought.

“Hinton built a better ladder, but a better ladder doesn’t necessarily get you to the moon,” he wrote. In 2022, he still stands by that assessment. Gary spoke about the path we are on versus the path we should be on for AGI. “The specific path we are on is big language models, an extension of big data. My view on this is not optimistic. They are far less impressive in their ability not to be toxic, to tell the truth, or to be reliable. I don’t think we want to build a general intelligence that is unreliable, misinforms people, and is potentially dangerous. For example, you have GPT-3, which has recommended that people commit suicide.

“There have been tremendous advances in machine translation, but not in machine understanding. Moral reasoning is nowhere to be found, and I don’t think AI is a healthy field right now. People don’t recognize the limits. DALL-E seems to be progressing in one sense, because it produces these very pretty pictures. In other respects, however, it is not progressing at all. It has not solved the problem of language. It recognizes some parts of what you say, but not the relationships between them. This problem will not magically go away. We may have billions of data points today, but these fundamental problems surrounding compositionality still have no solution. So AI is not reliable,” he said.

Arguing for nativism

In philosophy, empiricism is the view that all concepts come from experience and that one learns only from experience. Artificial intelligence builds on this very foundation, which is why models are trained on enormous amounts of data.

Gary takes the opposite position, in favor of innate knowledge. “If you look at the data for humans and other animals, we are born with a knowledge of the world. And unfortunately, most computer scientists aren’t trained in developmental psychology,” he said. In a 2017 debate with Yann LeCun, Marcus argued that deep learning is not capable of much more than simple perceptual tasks.

“If neural networks have taught us anything, it’s that pure empiricism has its limits,” he said in the debate. He went on to discuss the disadvantages of the empiricist approach. “Large networks have no built-in representation of time, only a marginal representation of space, and no representation of objects. At its core, language is about relating the sentences you hear, and systems like GPT-3 never do that. You can give them all the data in the world and they still don’t get the idea that language is about semantics. It’s an illusion. They can’t see irony, sarcasm, or contradiction. I see these systems as a test of the empiricist hypothesis, and it fails.”

What about the technological singularity?

The idea of the technological singularity is that ordinary humans will one day be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence. The discussion has shifted from the realm of science fiction to serious debate. But Gary believes there is “no magical moment of singularity”, because intelligence is not a single thing but a collection of different facets: sometimes machines are better than humans, and sometimes they are not.

“In sheer computing power, machines outperform humans. There is no question that a computer is better at analyzing positions on a chessboard than the average human. But an eight-year-old’s ability to watch a Pixar film and understand what’s happening far surpasses any machine,” Gary said. “The current AI is illiterate. It can pretend, but it doesn’t understand what it’s reading. The idea that all of this will change on some magical day, and that machines will then be smarter than humans, is an oversimplification.”

Is the Turing test reliable?

The Turing test, proposed in 1950 by Alan Turing, a founding father of AI, is one of the most classic tests used to measure AI progress. It takes the form of a game between two humans and a computer, in which the computer has to fool the human judge into believing that it, too, is human. But when it comes to measuring AI today, Gary says, “The Turing test isn’t very good. Humans are gullible and machines can be evasive. If you want to build a system that deceives people, you just have it avoid answering some of the questions.”

“Eugene Goostman was a system that won a small version of the competition, over conversations lasting a few minutes, by pretending to be a 13-year-old boy from Odessa whose English was not perfect. It responded with quips so as not to reveal its limitations, and to the untrained eye it was quite convincing, although a professional could still recognize its limits. All that tells us is that humans assume machines that can talk are intelligent, and that assumption turned out to be wrong.

“This is what happens with GPT-3. It says clever things sometimes, but it has no long-term picture of what you’re talking about. It doesn’t remember its answers from one minute to the next; there is no consistency, there is no real intelligence. Humans aren’t good at administering the test, and machines exploit that and end up passing it. But that doesn’t mean they understand what we ultimately want.”
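
For readers unfamiliar with the setup Gary is criticizing, here is a minimal sketch of the imitation game in Python. The judge and respondent functions are entirely hypothetical stand-ins, not any real benchmark harness; the point is only that an evasive machine facing a gullible judge can “pass” without understanding anything.

```python
import random

def imitation_game(judge, human_respondent, machine_respondent, questions):
    """Minimal sketch of the imitation game: a judge questions two hidden
    respondents and must guess which one is the machine."""
    # Randomly assign the respondents to anonymous slots A and B.
    slots = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:
        slots = {"A": machine_respondent, "B": human_respondent}

    transcript = [{label: respond(q) for label, respond in slots.items()}
                  for q in questions]

    guess = judge(questions, transcript)       # the slot the judge believes is the machine
    truth = "A" if slots["A"] is machine_respondent else "B"
    return guess == truth                      # True: machine caught, False: machine "passed"

# Toy respondents: the machine dodges questions instead of answering them,
# which is exactly the evasiveness Gary objects to.
def honest_human(question):
    return f"(a person answers '{question}' in good faith)"

def evasive_machine(question):
    return "I'm only 13 and my English isn't great, ask me something else!"

def gullible_judge(questions, transcript):
    return random.choice(["A", "B"])           # a careless judge is effectively guessing

if __name__ == "__main__":
    caught = imitation_game(gullible_judge, honest_human, evasive_machine,
                            ["What does it feel like to walk in the rain?"])
    print("machine caught" if caught else "machine passed")
```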

Online backlash and Gary’s beef with Yann LeCun

Marcus leaves no stone unturned in calling out the “celebrities” of the AI community. This is evident in his debates with AI pioneers Yann LeCun in 2017 and Yoshua Bengio in 2019. Indeed, in response to his 2018 critique of deep learning, LeCun said, “The number of valuable recommendations Gary Marcus has ever made is exactly zero.” Yet even as he made this comment online, LeCun had agreed with Gary in front of an NYU audience about the need for innate machinery.

“LeCun kind of bullied me,” said Gary.

“I said there were ten problems, and he said most of them were wrong. He didn’t write a long review; he just said I was mostly wrong, and no one has ever actually shown those points to be wrong. And it was like people following a political leader; everyone on Twitter lashed out at me. If a famous person tells you you’re wrong, that doesn’t mean you’re wrong.”

“I wrote a paper saying that deep learning is hitting a wall, and people started making cartoons of deep learning kicking the wall over. None of them seemed to have engaged with the intellectual content of the argument, which wasn’t that you can’t do deep learning, but that you have to use it in conjunction with other systems because of these particular weaknesses.”

Large language models are the wrong technology for responsible AI

Responsible and ethical AI is one of the main concerns of the technology sector today. Some key examples of what can go wrong are GPT-3 telling a person to commit suicide, or Delphi saying that genocide would be okay if everyone agreed to it. “Large language models are not the right technology for responsible and ethical AI,” said Gary. “They are very good at capturing statistical associations, but they are not good at acting responsibly and ethically. As long as most of the investment goes into them, we have a problem.

“There’s nothing wrong with having large amounts of data; other things being equal, large amounts are better than small ones. But we need systems that can accept explicit representations, expressed in something like English, and reason over them. We need technology where a system can propose an action, say a reply to a user, and evaluate it: could this do harm? We have to have that, and we don’t. We’re not even close. No system today can actually read a conversation. The best we have are content filters that look for hateful language, but they are very naïve in how they evaluate hate speech or misinformation.”
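
To make the point about naïve filters concrete, here is a minimal sketch, assuming a made-up blocklist rather than any real moderation system: a keyword filter only matches surface strings, so it misses paraphrased harm while flagging benign mentions.

```python
# Minimal sketch of a keyword-based content filter, the kind of surface-level
# check Gary calls naive: it matches strings, not meaning.
BLOCKLIST = {"kill yourself", "i hate you"}   # hypothetical and deliberately tiny

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    text = message.lower()
    return any(phrase in text for phrase in BLOCKLIST)

# Where it goes wrong:
print(naive_filter("You should just end it all"))          # False: harmful intent, no blocked phrase
print(naive_filter("My sister joked 'I hate you' today"))  # True: benign context, still blocked
```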

While Gary wasn’t sure which direction AI development will take, he pointed out that relying on historical data and leaving ethics solely to the judgment of AI researchers are two of the key factors making AI less accountable. He also spoke about hybrid approaches to building AI models that combine classical and deep learning systems. Among these hybrid approaches, neuro-symbolic AI, which would combine neural networks with symbolic systems, is a technique we still know relatively little about.

“You have to be able to represent things abstractly and symbolically. I just don’t see how to get into AI without at least doing that,” he said.
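
As a loose illustration of that hybrid idea (a sketch with made-up functions, not Gary’s own proposal or any real system), the example below has a stubbed “neural” perception step emit symbolic facts, and a symbolic component apply an explicit rule over them, so that the system captures relationships between parts rather than just the parts themselves.

```python
from typing import List, Tuple

Fact = Tuple[str, str, str]  # (subject, relation, object)

def neural_perception(scene_id: str) -> List[Fact]:
    """Stand-in for a neural network that maps raw input to symbolic facts.
    In a real hybrid system this would be a trained model; here it is hard-coded."""
    fake_detections = {
        "scene_1": [("cube_a", "color", "red"),
                    ("cube_b", "color", "blue"),
                    ("cube_a", "on_top_of", "cube_b")],
    }
    return fake_detections.get(scene_id, [])

def symbolic_reasoner(facts: List[Fact], query: Fact) -> bool:
    """Explicit, hand-written rule application over the extracted facts."""
    if query in facts:
        return True
    subject, relation, obj = query
    if relation == "above":                    # built-in rule, not learned from data
        return (subject, "on_top_of", obj) in facts
    return False

facts = neural_perception("scene_1")
print(symbolic_reasoner(facts, ("cube_a", "above", "cube_b")))  # True
print(symbolic_reasoner(facts, ("cube_b", "above", "cube_a")))  # False: relations matter, not just parts
```

The hand-written rule here plays the role of the “innate machinery” Gary keeps returning to: it is stipulated explicitly rather than learned from data.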
