Some have called it the most important and useful advance in AI in years. Others call it crazy accurate AI.
GPT-3 is a new tool from the AI research lab OpenAI. This tool was designed to generate natural language by analyzing thousands of books, Wikipedia entries, social media posts, blogs, and anything in between on the internet. It’s the largest artificial neural network ever created.
In this episode of Short and Sweet AI, I talk in more detail about how GPT-3 works and what it’s used for.
In this episode, find out:
What GPT-3 is
How GPT-3 can generate sentences independently
What supervised vs. unsupervised learning is
How GPT-3 shocked developers by creating computer code
Today I’m talking about a breathtaking breakthrough in AI which you need to know about.
Some have called it the most important and useful advance in AI in years. Others call it crazy accurate AI. It’s called GPT-3. GPT-3 stands for Generative Pre-trained Transformer 3, meaning it’s the third version to be released. One developer said, “Playing with GPT-3 feels like seeing the future.”
Another Mind-Blowing Tool from OpenAI
GPT-3 is a new AI tool from an artificial intelligence research lab called OpenAI. This neural network has learned to generate natural language by analyzing thousands of digital books, Wikipedia in its entirety, and a trillion words found on social media, blogs, news articles, anything and everything on the internet. A trillion words. Essentially, it’s the largest artificial neural network ever created. And with language models, size really does matter.
It’s a Language Predictor
GPT-3 can answer questions, write essays, summarize long texts, translate languages, and take memos; basically, it can create anything that has a language structure. How does it do this? Well, it’s a language predictor. If you give it one piece of language, its algorithms are designed to transform that input and predict the most likely piece of language to follow it.
Machine learning neural networks study words, their meanings, and how those meanings shift depending on the other words used in a text. The machine analyzes words to understand language. Then it generates sentences by taking words and sentences apart and rebuilding them itself.
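To make the idea of "predicting the next piece of language" concrete, here is a deliberately tiny sketch. It is not how GPT-3 works internally (GPT-3 is a transformer neural network with billions of parameters); it simply counts, in a toy corpus, which word most often follows each word, then predicts that word. The corpus and function names are illustrative inventions.

```python
from collections import Counter, defaultdict

# Toy corpus: in a real system this would be a trillion words of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the corpus.
followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than "mat" or "fish"
```

The core idea scales: GPT-3 replaces these simple counts with a learned statistical model of patterns across a vast amount of text, but the task is the same one, predicting what comes next.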
Supervised vs. Unsupervised Machine Learning
GPT-3 is trained with a form of machine learning called unsupervised learning. It’s unsupervised because the training data isn’t labelled with a right or wrong response. Free from the limits imposed by labelled data, unsupervised learning can detect all kinds of unknown patterns. The machine works on its own to discover information.
In supervised machine learning, the machine doesn’t learn on its own. The machine is supervised during its training by using data labelled with the correct answer. This method isn’t flexible. It can’t capture more complex relationships or unknown patterns.
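The contrast between the two approaches can be sketched in a few lines. This is a hypothetical toy example, not anything from GPT-3 itself: the supervised side learns from data that carries the correct label, while the unsupervised side is handed the same numbers with no labels and must find the grouping on its own.

```python
# Supervised: each training example is labelled with the correct answer.
labeled_data = [(1.0, "small"), (1.2, "small"), (9.8, "large"), (10.1, "large")]

def supervised_classify(value):
    """Predict the label of the nearest labelled training example."""
    return min(labeled_data, key=lambda pair: abs(pair[0] - value))[1]

# Unsupervised: the same values with NO labels. The machine must discover
# structure itself; here it naively splits the values around their midpoint,
# "finding" the small/large pattern without ever being told it exists.
unlabeled_data = [1.0, 1.2, 9.8, 10.1]
midpoint = (min(unlabeled_data) + max(unlabeled_data)) / 2
clusters = {x: ("group A" if x < midpoint else "group B") for x in unlabeled_data}

print(supervised_classify(1.1))  # "small": learned directly from the labels
print(clusters[9.8])             # "group B": a pattern found without labels
```

Notice that the unsupervised side never names its groups "small" or "large"; it only discovers that two groups exist, which is exactly the flexibility, and the vagueness, described above.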
OpenAI first described GPT-3 in a research paper in May 2020. Then it allowed selected people and developers to use it and report their experiences online of what GPT-3 can do. There’s even an informative article about GPT-3 written entirely by GPT-3.
Judge for Yourself
One researcher used GPT-3 to generate a Harry Potter parody in the style of Ernest Hemingway. Take a listen: “It was a cold day on Privet Drive. A child cried. Harry felt nothing. He was dryer than dust. He had been silent too long. He had not felt love. He had scarcely felt hate. Yet the Dementor’s Kiss killed nothing.”
I think that sounds pretty good!
And there’s a Twitter feed called gptwisdom which generates quotes using GPT-3. Here are a few examples:
“Dull as a twice-told tale.” Or: “The point at which a theory ceases to be a theory is called its limit.” Or this thoughtful GPT-3-generated quote: “The truthfulness of your simplicity can only grow, as you improve your character.”
Things to Know About This Technology
In essence, GPT-3 is a universal language model. The model has more than 175 billion parameters, which are mathematical representations of the distinguishing patterns of language it found during training. Those patterns are a map of human language. Using this map, GPT-3 learned to perform all sorts of tasks it was not even built to do.
One unexpected ability is that GPT-3 can write computer code. That makes sense, because computer code is a type of language. But this behavior was entirely new; it even surprised GPT-3’s designers. They didn’t build GPT-3 to generate computer code. They trained it to do just one thing: predict the next word in a sequence of words.
All in all, people discovered it can do many tasks that it wasn’t originally trained to do. They found it could build an app by giving it a description of what they wanted the app to do. It can generate charts and graphs from plain English. It can identify paintings from written descriptions. It can generate quizzes for practice on any topic and explain the answers in detail.
The Best But Flawed
GPT-3’s ability to generate text is the best that has ever been seen in AI. Yet it’s far from flawless. It can spew offensive and biased language, and it struggles with questions that involve reasoning by analogy. It isn’t guided by any coherent understanding of reality because it has no internal model of the world. Sometimes it produces nonsense because, essentially, it’s just stringing words together. Other AI researchers say it’s like a black box: it’s hard to figure out what this thing is doing.
A Machine Like Us
And yet, the consensus is that GPT-3 is shockingly good. But because it can generate convincing tweets, blog posts, and computer code, people think of it as being like them. They are reading humanity into the GPT-3 system and, as such, run the risk of ignoring its limits. Sam Altman, one of the founders of OpenAI, has thanked everyone for their compliments. But he urges caution about the hype. He says, “AI is going to change the world but GPT-3 is just a very early glimpse. We still have a lot to figure out.”
Thanks for listening, I hope you found this helpful. Be curious and if you like this episode, please leave a review and subscribe because then you’ll receive these episodes weekly. From Short and Sweet AI, I’m Dr. Peper.