One of the founding principles of OpenAI, the company behind technology such as GPT-3 and DALL·E, is that AI should be available to all, not just the few.
Co-founded by Elon Musk and five others, OpenAI was created in part to address concerns that AI could damage society.
OpenAI was originally founded as a non-profit AI research lab. In just six short years, the company has paved the way for some of the biggest breakthroughs in AI. Recent controversy arose when OpenAI announced that a separate section of its company would become for-profit.
In this episode of Short and Sweet AI, I discuss OpenAI’s mission to develop human-level AI that benefits all, not just a few. I also discuss the controversy around OpenAI’s decision to become for-profit.
In this episode, find out:
How human-level AI or AGI differs from Narrow AI
How far we are from using AGI in everyday life
The recent controversy around OpenAI’s decision to switch to a for-profit model
Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a truly innovative company called OpenAI.
So what do we know about OpenAI, the company unleashing all these mind-blowing AI tools such as GPT-3 and DALL·E?
OpenAI was founded as a non-profit AI research lab just six short years ago by Elon Musk and five others, who pledged a billion dollars. Musk has openly warned that AI poses the greatest existential threat to humanity. He was motivated in part to create OpenAI by concerns that human-level AI could damage society if built or used incorrectly.
Human-level AI is known as AGI, or Artificial General Intelligence. The AI we have today is called Narrow AI: it’s good at doing one thing. General AI would be good at any task, because it’s created to learn how to do anything. In short, Narrow AI excels at what it was designed for, while Artificial General Intelligence excels at learning how to do whatever it needs to do.
To be a bit more specific, General AI would be able to learn, plan, reason, communicate in natural language, and integrate all of these skills to apply them to any task, just as humans do. It would be human-level AI. Creating artificial general intelligence is the holy grail of the leading AI research groups around the world, such as Google’s DeepMind and Elon Musk’s OpenAI.
Because AI is accelerating at exponential speed, it’s hard to predict when human-level AI might come within reach. Musk wants computer scientists to build AI in a way that is safe and beneficial to humanity. He acknowledges that in trying to advance friendly AI, we may create the very thing we are concerned about. Yet he thinks the best defense is to empower as many people as possible with AI. He doesn’t want any one person or small group to hold an AI superpower.
OpenAI has a 400-word mission statement, which prioritizes AI for all over its own self-interest, and it’s an environment where employees treat AI research not as a job but as an identity. The most succinct summary of its mission has been phrased as “… an ideal that we want AGI to go well.” Two specific parts of its mission are to avoid building human-level AI that harms humanity or unduly concentrates power in the hands of a few.
But there’s a big controversy. OpenAI recently reorganized to form a separate, for-profit arm. It never released GPT-3 as open code for programmers to use and build on. Instead, it licensed GPT-3 exclusively to Microsoft for a billion dollars. OpenAI realized staying a non-profit was financially untenable, and it defends its decision by explaining that it needs billions of dollars to build AGI and fulfill its mission. Personally, I see the necessity for this. I’ve said elsewhere, “If you’re dedicated to your mission, you first have to find consistent funding. We don’t always need more ideas about how to make the world better. We need more ways to consistently fund the ideas we have.” It’s a huge challenge when you realize that DeepMind, OpenAI’s main competitor, spent 442 million dollars on research in the same year OpenAI spent only 11 million.
But there’s been an outcry from critics who say switching to a for-profit model is inconsistent with OpenAI’s mission to democratize AI for all. I’d be interested to know what you think about OpenAI’s decision. Do you think its non-profit mission justifies it becoming for-profit? Let me know your thoughts and leave a comment.
Thanks for listening; I hope you found this helpful. Be curious, and if you liked this episode, please leave a review and subscribe so you’ll receive my episodes weekly. From Short and Sweet AI, I’m Dr. Peper.