OpenAI CEO Sam Altman says AI will reshape society, acknowledging risks: ‘A little bit scared of this’
The CEO of the company that created ChatGPT believes artificial intelligence technology will reshape society as we know it. He believes it comes with real dangers, but can also be “the greatest technology humanity has yet developed” to drastically improve our lives.
“We’ve got to be careful here,” said Sam Altman, CEO of OpenAI. “I think people should be happy that we are a little bit scared of this.”
Altman sat down for an exclusive interview with ABC News’ chief business, technology and economics correspondent Rebecca Jarvis to talk about the rollout of GPT-4 — the latest iteration of the AI language model.
In his interview, Altman was emphatic that OpenAI needs both regulators and society to be as involved as possible with the rollout of ChatGPT — insisting that feedback will help determine the potential negative consequences the technology could have on humanity. He added that he is in “regular contact” with government officials.
ChatGPT is an AI language model; the GPT stands for Generative Pre-trained Transformer.
Released only a few months ago, it is already considered the fastest-growing consumer application in history. The app hit 100 million monthly active users in just a few months. In comparison, TikTok took nine months to reach that many users and Instagram took nearly three years, according to a UBS study.
Watch the exclusive interview with Sam Altman on “World News Tonight with David Muir” at 6:30 pm ET on ABC.
Though “not perfect,” per Altman, GPT-4 scored in the 90th percentile on the Uniform Bar Exam. It also achieved a near-perfect score on the SAT Math test, and it can now proficiently write computer code in most programming languages.
GPT-4 is just one step toward OpenAI’s goal of eventually building Artificial General Intelligence — the point at which AI crosses a powerful threshold, producing systems that are generally smarter than humans.
Even as he celebrated the success of his product, Altman acknowledged the possible dangerous uses of AI that keep him up at night.
“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, (they) could be used for offensive cyberattacks.”
A common sci-fi fear that Altman doesn’t share: AI models that