- The Washington Times - Friday, July 7, 2023

OpenAI is assembling a team to prevent emerging artificial intelligence technology from going rogue and fueling the extinction of humanity, which the company now fears is a real possibility.

The makers of the popular chatbot ChatGPT say AI will power a new superintelligence that will help solve the world’s most important problems and be the most consequential technology ever invented by humans.

And yet, OpenAI’s Ilya Sutskever and Jan Leike warned that humans are not prepared to handle technology smarter than they are. 

“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” Mr. Sutskever and Mr. Leike wrote on OpenAI’s blog. “While superintelligence seems far off now, we believe it could arrive this decade.”

If an AI-fueled apocalypse is right around the corner, OpenAI’s brightest minds admit they do not yet have a way to stop it.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Mr. Sutskever and Mr. Leike wrote. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”

Mr. Sutskever, OpenAI co-founder and chief scientist, and Mr. Leike, OpenAI alignment head, said they are assembling a new team of researchers and engineers to help forestall the apocalypse by solving the technical challenges of superintelligence. They’ve given the team four years to complete the task.

The potential end of humanity sounds bad, but the OpenAI leaders said they remain hopeful they will solve the problem.   

“While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” the OpenAI duo wrote. “There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.”

Mr. Sutskever and Mr. Leike said they intended to share the results of their work widely. They said OpenAI is hiring research engineers, scientists and managers who want to help stop the nerds’ new toys from enslaving or eliminating mankind. 

Policymakers in Washington are also fretting about AI danger. Senate Majority Leader Charles E. Schumer, New York Democrat, has called for new rules to govern the technology, and the Senate Judiciary Committee has become a center for hearings on oversight of AI. 

The committee’s growing investigation of AI has included examinations of fears that AI may enable cyberattacks, political destabilization and the deployment of weapons of mass destruction.  

OpenAI CEO Sam Altman called for regulation when he testified before the Senate Judiciary’s subcommittee on privacy, technology and the law in May. Mr. Altman said he harbored concerns about the potential abuse of AI tools for manipulating people. 

Senate Judiciary Chairman Richard J. Durbin has expressed an interest in creating an “accountability regime for AI” to include potential federal and state civil liability for when AI tools do harm.

Big Tech companies such as Google and Microsoft, a benefactor of OpenAI, have also called for new regulation of artificial intelligence, and the federal government is listening. 

The Biden administration is busy crafting a national AI strategy that the White House Office of Science and Technology Policy has billed as taking a “whole of society” approach. 

Top White House officials were meeting on AI multiple times a week as the White House chief of staff’s office worked to choose the next steps for President Biden to take on AI, a White House official said in June.

OpenAI said Thursday it is making GPT-4, its “most capable” AI model, generally available to developers.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2023 The Washington Times, LLC.
