Meta’s new long-term AI study sounds a lot like OpenAI’s current impasse


Meta recently announced a long-term research partnership to study the human brain. According to the company, it intends to use the results of this study to “guide the development of AI that processes speech and text as efficiently as humans.”

This is the latest in Meta’s ongoing quest for the machine learning equivalent of alchemy: generating speech from thoughts.

The big idea: Meta wants to understand exactly what goes on in people’s brains when they process language. Then it will somehow use that data to develop an AI capable of understanding speech.

According to Meta AI, the company has spent the past two years developing an AI system that processes brainwave datasets to gain insights into how the brain handles communication.

Now the company is working with research partners to create its own databases.

According to a Meta AI blog post:

Our collaborators at NeuroSpin are creating an original neuroimaging dataset to expand this research. We will open source the dataset, deep learning models, code, and research papers resulting from these efforts to advance discoveries in both the AI and neuroscience communities. All of this work is part of Meta AI’s broader investments toward human-level AI that learns from limited or no supervision.

The plan is to develop an end-to-end decoder for the human brain. This would involve building a neural network capable of translating raw brainwave data into words or images.
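To make the idea concrete, here is a deliberately tiny sketch of what "decoding words from brainwave data" means in principle. This is not Meta's system: the vocabulary, feature dimensions, and data are all invented, and a nearest-centroid classifier stands in for the deep networks a real decoder would use.

```python
import numpy as np

# Toy illustration (not Meta's actual pipeline): decode a word label from a
# simulated brainwave feature vector using a nearest-centroid classifier.
# Real decoders train deep networks on MEG/EEG recordings; everything here
# is fabricated for the example.

rng = np.random.default_rng(0)
WORDS = ["hello", "world", "yes", "no"]  # hypothetical vocabulary
DIM = 16  # pretend each brainwave segment is reduced to 16 features

# Invent a "true" neural signature per word, then noisy recordings of it.
signatures = {w: rng.normal(size=DIM) for w in WORDS}

def record(word: str) -> np.ndarray:
    """Simulate one noisy brainwave reading for a spoken word."""
    return signatures[word] + rng.normal(scale=0.3, size=DIM)

# "Training": average several noisy readings into a centroid per word.
centroids = {w: np.mean([record(w) for _ in range(20)], axis=0) for w in WORDS}

def decode(reading: np.ndarray) -> str:
    """Map a brainwave reading to the word whose centroid is closest."""
    return min(WORDS, key=lambda w: float(np.linalg.norm(reading - centroids[w])))
```

The hard part, of course, is everything this sketch hand-waves away: real neural recordings are noisy, high-dimensional, and vary across people, which is why Meta is building large datasets and deep models rather than averaging a few readings.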

That sounds pretty outlandish, and things quickly spiral into unfamiliar territory as the blog post continues under the subheading “Towards Human-Level AI”:

Taken together, these studies support an exciting possibility — there are indeed quantifiable similarities between brains and AI models. And these similarities can help provide new insights into how the brain works. This opens up new avenues in which neuroscience will guide the development of smarter AI and where AI, in turn, will help uncover the wonders of the brain.

Meta seems to be following in OpenAI’s footsteps here. Both companies have vested interests in developing artificial general intelligence (AGI) — or an AI that’s generally capable of doing anything a human can.

OpenAI claims AGI is its only mission, whereas Meta appears to be more of a dilettante, with its main focus on building the metaverse.

But they both go about it the same way: by trying to reach it through natural language processing (NLP).

It is unclear how predicting speech from brainwaves will lead to human-level speech recognition, just as it is unclear how GPT-3 or future text generators will lead to AGI.

It could be argued that, instead of pursuing a clear goal, researchers are merely attempting to solve problems in the general realm of human understanding en route to the eventual promise of AGI.

But there’s also the idea that deep learning isn’t robust enough to sufficiently mimic or emulate the human brain to build machines capable of human thinking.

Ultimately, Meta’s work is important for developing machine learning models that analyze brain activity, and it could help expand our understanding of how the brain works.

But it seems a bit far-fetched to categorize the endeavor as part of the quest for machines capable of human thinking. You’re not teaching the AI to understand speech; you’re teaching it to predict brainwave activity.

Based on the accompanying research cited in the company’s announcement, it appears that Meta is no closer to finding the secret sauce that will turn data-driven insights into something that provides vital heat to AI than Tesla or OpenAI is.

It’s time Big Tech stopped looking at every AI advancement as a direct bridge to tomorrow’s sentient robots. And maybe it’s time to consider a different approach to AGI, too.
