Can a computer win the Nobel Prize for Literature? GPT-4 could bring us closer to the epic potential of AI

Reports of the upcoming release of the next iteration of a neural network model for machine learning developed by OpenAI, the San Francisco-based company co-founded by billionaire Elon Musk, have caused a stir in the artificial intelligence community. The language abilities of GPT-3, which was introduced in 2020, have made people curious about the upcoming model. From writing books and poetry to creating computer code, GPT-3 gave an insight into what AI can do. With GPT-4 expected to improve on its predecessor, here is a look at how far artificial intelligence is seen to have come on the road to achieving human-like capabilities.

What is GPT-3?

Consider a satirical piece in the style of Jerome K. Jerome, the writer of ‘Three Men In A Boat’: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the seaside and found the whole place twittering like a starling cage. I called it an anomaly, and it is.” Before commenting on the style, the form, and the parallels to the English humorist’s language, it must be noted that this paragraph (and the entire six-page story it opens) was written by GPT-3.

GPT-3, short for “Generative Pre-trained Transformer” (originally “Generative Pre-Training”), is the third generation of a model that was trained on data gathered by crawling the Internet to generate text. GPT-3 can learn anything structured like a language and then perform tasks related to that language. For example, it can be trained to write press releases and tweets, but also computer code. Such an AI is called a predictive language model, and it carries out natural language processing (NLP) tasks. That is, “it is an algorithmic structure designed to take one piece of language (an input) and transform it into what it predicts is the most useful following piece of language for the user,” writes author Bernard Marr in Forbes.
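
To make “predicting the next piece of language” concrete, here is a minimal sketch of that kind of generation. Since GPT-3 itself is not openly downloadable, the sketch uses its freely available predecessor GPT-2 through the Hugging Face transformers library; the prompt and the sampling settings are illustrative choices, not anything prescribed by OpenAI.

```python
# A minimal sketch of predictive text generation, using the openly
# available GPT-2 model via Hugging Face's transformers library
# (GPT-3 itself is only reachable through OpenAI's hosted API).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "It is a curious fact that"
inputs = tokenizer(prompt, return_tensors="pt")

# Autoregressive generation: the model repeatedly predicts a likely
# next token and appends it to what it has written so far.
output_ids = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,                       # sample rather than always take the top token
    top_p=0.9,                            # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,  # silences a padding warning for GPT-2
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```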

GPT-3 has not been made available to the general public, but reports from October last year said access had been given to selected experts, who have since offered an insight into the range of tasks it can perform.

How will GPT-4 be different?

Prior to GPT-3 there were GPT-2 and GPT-1, introduced by OpenAI in 2019 and 2018, respectively. But those were baby steps on the way to the launch of GPT-3. While GPT-2 had 1.5 billion parameters, GPT-3 has 175 billion, making it the largest artificial neural network ever created and roughly 10 times larger than the model it surpassed, Microsoft’s Turing NLG, which had 17 billion parameters.

An artificial neural network (ANN) is a system that mimics the way the brain works and enables “computer programs to recognize patterns and solve common problems in the areas of AI, machine learning and deep learning,” says IBM.
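
As a toy illustration of that idea (only of the general idea; GPT-3’s actual architecture is a vastly larger transformer), here is a small two-layer network in Python that learns the XOR pattern by gradient descent. The layer sizes, learning rate, and iteration count are arbitrary choices for the example.

```python
# A toy two-layer neural network in NumPy that learns XOR, illustrating
# how an ANN adjusts its weights to recognize a pattern. GPT-3 works on
# the same broad principle, but with 175 billion parameters.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Weights and biases for a 2-8-1 network
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # typically converges toward [[0], [1], [1], [0]]
```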

“Pre-trained”, for GPT and similar NLP programs, means that such models have been fed huge amounts of data in order to learn the rules of a language, the variations in the meanings of words, and so on. Once such a model has been trained, it can generate output from a simple prompt. For example, of the story a la Jerome K. Jerome, the user who tried it on GPT-3 said on Twitter that “all I put in was the title, the author’s name and the first ‘It’, #gpt3 did the rest”.
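
For the selected experts with access, prompting GPT-3 looked roughly like the sketch below, which uses OpenAI’s Python library as it existed around GPT-3’s release. The engine name, sampling parameters, and prompt here are illustrative assumptions, and the API has changed since then.

```python
# A sketch of prompting GPT-3 through OpenAI's early Completion API.
# Treat the details as illustrative: the endpoint and engine names
# have changed over time, and a real API key is required.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# As in the tweet quoted above: a title, an author's name,
# and a single opening word.
prompt = (
    "The importance of being on Twitter\n"
    "by Jerome K. Jerome\n"
    "It"
)

response = openai.Completion.create(
    engine="davinci",  # the original GPT-3 base engine
    prompt=prompt,
    max_tokens=150,
    temperature=0.8,   # higher values give more varied prose
)
print(prompt + response.choices[0].text)
```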

Now we come to GPT-4. According to reports, following its trend of releasing a new version every year, OpenAI will soon be releasing a version for expert testing. It has thus been suggested that GPT-4 could come out early next year, or in 2023. And it is widely expected to be a game changer.

A report from Towards Data Science (TDS) said GPT-4 could have 100 trillion parameters and would be “five hundred times” larger than GPT-3. “The brain has around 80-100 billion neurons (GPT-3’s order of magnitude) and around 100 trillion synapses. GPT-4 will have as many parameters as the brain has synapses. The sheer size of such a neural network could bring qualitative leaps over GPT-3 that we can only imagine,” it added.
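
The “five hundred times” figure is simple arithmetic on the two parameter counts, as this back-of-the-envelope check shows (the 100-trillion figure is TDS’s speculation, not a confirmed specification):

```python
# Back-of-the-envelope check of the "five hundred times" claim.
gpt3_params = 175e9    # GPT-3: 175 billion parameters
gpt4_params = 100e12   # GPT-4: 100 trillion parameters (speculative)
print(gpt4_params / gpt3_params)  # ~571, i.e. roughly five hundred times
```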

The TDS report also said that GPT-4 “is unlikely to be just a language model”, referring to a December 2020 article by Ilya Sutskever, the chief scientist at OpenAI, in which he said that in 2021 “language models will start becoming aware of the visual world”.

However, Sam Altman, the CEO of OpenAI, reportedly said that GPT-4 will not be larger than GPT-3 but will use more computing resources.

So how close are we to AI that is as good as human intelligence?

The stated goal of OpenAI is to create artificial general intelligence, that is, AI with the same intelligence as a normal person. It is something that sounds a lot simpler than it actually is. As OpenAI itself states: “AI systems today have impressive but limited capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task.”

But it adds that “it’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly”. Hence the organisation’s stated mission: “to advance digital intelligence in the way that is most likely to benefit humanity as a whole”.

When asked how long that will take, OpenAI says that “it’s hard to predict when human-level AI might come within reach”. As for whether deep learning is the best way to crack the AI code, the organisation believes it is, but it should be noted that it was long believed that AI would come close to human intelligence if a program could be developed to mimic human thought patterns and surpass people in a game like chess. The fact is, however, that “the solution to each task was much less general than people had hoped”.

But it focused on deep learning because the strategy turned out to deliver “excellent results on pattern recognition problems such as recognizing objects in images, machine translation and speech recognition”, and it now offers a glimpse of “what it might be like for computers to be creative, to dream, and to experience the world”.

However, GPT-3 is not exactly the perfect text-writing tool that people can rely on completely. As Altman himself said, “The GPT-3 hype is too much. AI will change the world, but GPT-3 is just a first glimpse.” Pointing out one of its drawbacks, Marr writes that while it can handle tasks like creating short texts or basic applications, its output becomes less useful (actually, “gibberish”) when it is asked to produce something longer or more complex.

But while it is believed that future iterations of such AI systems would iron out the flaws of previous generations, not everyone is convinced. TDS quotes Stuart Russell, a computer science professor at the University of California, Berkeley, and an AI pioneer, as saying that “focusing on raw computing power misses the point entirely… we don’t know how to make a machine really intelligent”, even if it were “the size of the universe”. That is, deep learning alone may not be enough to achieve human-level intelligence.

But it is an approach that some of the biggest names in tech are taking nonetheless. It is the path taken by Microsoft, for example, which is also an investor in OpenAI. The company put out a sort of explainer, “generated by the Turing NLG language model itself”, which says: “Massive deep learning language models… with billions of parameters learned from essentially all the text published on the Internet have improved the state of the art on nearly every downstream natural language processing (NLP) task, including question answering, conversational agents and document understanding, among others.”

All of this means that scientists are yet to write off human-like AI, although many say, as stated in a report by consultancy McKinsey, that artificial general intelligence “is nowhere near real”. But the report notes that “many academics and researchers claim that there is at least a chance that human-level artificial intelligence could be achieved in the next decade”.

“AI at the human level will be a profound scientific achievement (and an economic boon) and could happen by 2030 (25 percent chance) or by 2040 (50 percent chance), or never (10 percent chance),” Richard Sutton, professor of computer science at the University of Alberta, is reported to have said.
