A Beginner’s Guide to the AI Apocalypse: The Democratization of “Expertise”

In this series, we examine some of the most popular doomsday scenarios predicted by modern AI experts. Previous articles cover misaligned objectives, artificial stupidity, WALL-E Syndrome, humanity joining the hivemind, and killer robots.

We’ve covered a lot of ground in this series (see above), but nothing comes close to our next topic. The “democratization of expertise” may sound like a good thing – democracy, expertise, what’s not to like? But by the time you finish reading this article, we intend to have convinced you that it’s the biggest AI-related threat facing our species.

To understand this properly, we need to revisit an earlier post about what we like to call “WALL-E Syndrome.” This is a made-up condition in which we become so dependent on automation and technology that our bodies grow so weak we can no longer function without the physical assistance of machines.

When we talk about the “democratization of expertise,” we specifically mean something best described as “WALL-E Syndrome for the brain.”

To be clear, we are not referring to the democratization of information, something vital to human freedom.

The big idea

There’s a popular board game called “Trivial Pursuit” that challenges players to answer completely unrelated trivia questions from a variety of categories. It’s been around since long before the dawn of the internet, so it’s designed to be played with only the knowledge you already have in your brain.

You roll the dice and move a pawn around the board until it comes to a stop on a colored square. You then draw a card from a large deck of questions and try to answer the one that matches the category you landed on. To check whether you were successful, you flip the card over and see if your answer matches what’s printed.

A Trivial Pursuit game is only as “accurate” as its database. That means if you’re playing the 1999 edition and you’re asked which MLB player holds the record for most home runs in a single season, you’ll have to give an outdated answer to match what’s printed on the card.

The correct answer is Barry Bonds, with 73. But since Bonds didn’t break the record until 2001, the 1999 edition most likely lists the previous record-holder, Mark McGwire, at 70 (set in 1998).

The problem with databases, even expertly curated and hand-labeled ones, is that they only ever represent a snapshot of the data at a given point in time.
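Here’s a minimal sketch (in Python, with a made-up answer key) of the problem: the card deck is a snapshot, and a snapshot can only be as current as its print date.

```python
# Hypothetical answer keys -- the question and answers come from the
# example above; the dictionaries are just for illustration.
answers_1999 = {
    "Most home runs in a single MLB season?": "Mark McGwire, 70 (1998)",
}
answers_today = {
    "Most home runs in a single MLB season?": "Barry Bonds, 73 (2001)",
}

question = "Most home runs in a single MLB season?"

# The 1999 deck can only "know" what was true when it was printed.
print(answers_1999[question])   # -> Mark McGwire, 70 (1998)
print(answers_today[question])  # -> Barry Bonds, 73 (2001)
```

No amount of careful curation in 1999 could have put Bonds on that card; the database was correct right up until the world changed.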

Now let’s extend this idea to a database that isn’t curated by experts. Imagine a Trivial Pursuit game that works exactly the same as the vanilla edition, except the answers to each question are crowdsourced from random people.

“What is the lightest element on the periodic table?” The aggregated answer from 100 random people we asked in Times Square: “I don’t know, maybe helium?”

In the next edition, however, the answer might read: “According to 100 randomly selected high school juniors, the answer is hydrogen.”
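To see why “who you polled” becomes the whole ballgame, here’s a hypothetical sketch of how such a crowdsourced edition might pick its printed answer: a simple majority vote over whatever responses happen to come in (the poll numbers are invented, purely for illustration).

```python
from collections import Counter

def crowd_answer(responses: list[str]) -> str:
    """Return whatever answer the crowd gave most often."""
    return Counter(responses).most_common(1)[0][0]

# Invented poll results for "lightest element on the periodic table?"
times_square = ["helium"] * 55 + ["hydrogen"] * 30 + ["no idea"] * 15
hs_juniors   = ["hydrogen"] * 80 + ["helium"] * 20

print(crowd_answer(times_square))  # -> helium   (wrong)
print(crowd_answer(hs_juniors))    # -> hydrogen (right)
```

Same question, same aggregation rule, opposite answers. The “truth” in the deck is entirely a function of the sample.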

What does that have to do with AI?

Sometimes the wisdom of the crowd is useful, for example when you’re trying to figure out what to watch next. But sometimes it’s really silly, like when it’s 1953 and you’re asking 1,000 scientists whether women can experience orgasm.

Whether it is useful for large language models (LLMs) depends on how they are used.

LLMs are a type of AI system used in a wide variety of applications. Google Translate, the chatbot on your bank’s website, and OpenAI’s infamous GPT-3 are all examples of LLM technology in use.

In the case of translators and business-focused chatbots, the AI is usually trained on carefully curated datasets, because those systems serve a narrow purpose.

But many LLMs are purposely trained on giant dumpsters full of unverified data just so the people building them can see what they’re capable of.

Big Tech has convinced us that it’s possible to make these machines so big that eventually they just become conscious. The promise is that they’ll be able to do anything a human can do, but with the brain of a computer!

And you don’t have to look far to imagine the possibilities. Take 10 minutes and chat with Meta’s BlenderBot 3 (BB3) and you’ll see what all the fuss is about.

It’s a brittle, easily confused mess that spews out gibberish and thirsty “let’s be friends!” nonsense more often than anything coherent, but it’s pretty fun when the parlor trick works just right.

You don’t just chat with the bot; the experience is gamified so that the two of you build a profile together. One instance of the AI eventually decided it was a woman. Another decided I was actually the actor Paul Greene. All of this is reflected in its so-called “Long-Term Memory.”

It also assigns me tags. If we talk about cars, I might get a “likes cars” tag. As you can imagine, it could one day be extremely useful for Meta if it could connect the profile you create while chatting with the bot to its advertising services.
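We don’t know how Meta implements this internally, but the mechanics don’t need to be fancy. Here’s a hypothetical sketch of how chat-derived tags could be harvested into exactly the kind of profile an ad system consumes (the tag names and keyword lists are invented for illustration, not Meta’s actual code):

```python
# Hypothetical sketch -- keywords and tags are invented; this shows
# the general shape of the mechanism, not Meta's implementation.
TOPIC_TAGS = {
    "likes cars":   {"car", "cars", "engine", "horsepower"},
    "likes movies": {"movie", "film", "actor", "joker"},
}

def update_profile(message: str, profile: set[str]) -> set[str]:
    """Attach a tag to the profile whenever the chat touches a topic."""
    words = set(message.lower().split())
    for tag, keywords in TOPIC_TAGS.items():
        if words & keywords:
            profile.add(tag)
    return profile

profile: set[str] = set()
update_profile("I love cars with a huge engine", profile)
print(profile)  # -> {'likes cars'}
```

A set of interest tags like that is precisely the input shape an ad-targeting pipeline is built to consume; the only novelty is that you volunteered it by chatting.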

But it doesn’t assign itself tags for its own benefit. It could pretend to remember things without surfacing labels in its UI. The tags are for us.

In this way, Meta can make us feel connected to the chatbot and even a little responsible for it.

It’s MY BB3 bot, he remembers ME and he knows what I taught him!

It’s a form of gamification. You earn these tags (both yours and the AI’s) by chatting. My BB3 AI likes the Joker from the Heath Ledger Batman movie; we had a whole conversation about it. There’s not much difference between that achievement and a high score in a video game, at least as far as my dopamine receptors are concerned.

The truth is, we don’t train these LLMs to be smarter. We train them to be better at outputting text, which leads us to want them to output more text.

Is that a bad thing?

The problem is that BB3 was trained on a dataset so large we can only describe it as “internet-sized.” It contains billions of documents, ranging from Wikipedia entries to Reddit posts.

It would be impossible for humans to sift through all that data, so it’s impossible for us to know exactly what’s in it. But billions of people use the internet every day, and it seems like for every person who says something smart, eight people say things that make no sense to anyone. It’s all in the database. If someone said it on Reddit or Twitter, it was probably used to train models like BB3.

Despite this, Meta designs it to mimic human trustworthiness and to keep us engaged.

It’s a short hop from creating a chatbot that appears human to tweaking its output to convince the average person that it’s smarter than them.

At least we can fight killer robots. But if even a fraction of the people using Meta’s Facebook app started trusting a chatbot over human experts, it could have horribly detrimental effects on our entire species.

What’s the worst that could happen?

We saw this on a small scale during the pandemic lockdowns, when millions of people without medical training chose to ignore medical advice because of their political ideology.

Given the choice to believe politicians with no medical training or the overwhelming peer-reviewed, research-backed consensus of the global medical community, millions chose to “trust” politicians more than scientists.

The democratization of expertise, the idea that anyone can be an expert if they have access to the right data at the right time, is a serious threat to our species. It teaches us to trust any idea as long as the crowd thinks it makes sense.

For example, generations of us believed that Pop Rocks and Coca-Cola were a deadly combination, that bulls hate the color red, that dogs can only see in black and white, and that humans only use 10 percent of their brains. All of these are myths, but at some point in our history each of them was considered “common knowledge.”

And while spreading misinformation out of ignorance may be only human, democratizing expertise at the scale Meta can reach (nearly a third of the people on Earth use Facebook monthly) could have a disastrous impact on humanity’s ability to tell shit from Shinola.

In other words, it doesn’t matter how smart the smartest people in the world are when the general public trusts a chatbot trained on data created by the general public.

As these machines become more powerful and better at mimicking human speech, we’re approaching a terrible tipping point: the point where their ability to convince us that what they’re saying makes sense far outstrips our ability to spot bullshit.

The democratization of expertise happens when everyone believes they’re an expert. Traditionally, the marketplace of ideas tends to sort things out when someone claims to be an expert but clearly doesn’t know what they’re talking about.

We often see this on social media, when someone gets dunked on for trying to explain a subject to a person who knows far more about it than they do.

What if all armchair experts got an AI companion to cheer them on?

If the Facebook app can so thoroughly capture our attention that we forget to pick our kids up from school, or override our better judgment so that we text while driving, what do you think Meta can do with a state-of-the-art chatbot designed to tell every single nut on the planet exactly what they want to hear?
