Musk, Wozniak, and Hundreds of Scientists Call For an A.I. Pause

OpenAI CEO Sam Altman. The Washington Post via Getty Images

A growing group of tech leaders and computer scientists, including Elon Musk and Apple cofounder Steve Wozniak, is calling for OpenAI and other artificial intelligence labs to pause training A.I. systems more advanced than GPT-4, the newest language model behind the text generator ChatGPT.

In an open letter titled “Pause Giant A.I. Experiments,” published yesterday (March 28) by the nonprofit Future of Life Institute, the signatories urge A.I. companies to draft a shared set of safety protocols for advanced A.I. development before creating more powerful software that may pose dangers to humanity.

The letter has collected more than 1,000 signatures from influential entrepreneurs, academics and investors, including Elon Musk, Steve Wozniak, 2020 presidential candidate Andrew Yang, Israeli author Yuval Noah Harari and computer scientist Yoshua Bengio.

The success of ChatGPT has triggered a race among A.I. companies large and small to develop ever more powerful systems, ones that, the letter warns, not even their creators can understand or control.

“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening,” the letter said.

“We must ask ourselves: Should we automate away all the jobs, including the fulfilling ones?” it continued. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

Rising concern over general purpose A.I.

A.I. systems are often described as a “black box” technology: once deployed, not even their creators know exactly how they work. As new software like ChatGPT becomes seemingly capable of performing many human tasks, there is a growing fear of future A.I. systems outsmarting and turning against their human creators. It’s a concern for many industry leaders, including the creator of ChatGPT.

“Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us?” Bill Gates wrote in a blog post last week. “These questions will get more pressing with time.”

Geoffrey Hinton, a computer science professor hailed as “the godfather of A.I.,” recently said A.I. is progressing faster than most people think. “Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose A.I. And now I think it may be 20 years or less,” Hinton told CBS News over the weekend.

General purpose A.I., or artificial general intelligence (AGI), describes A.I. systems capable of doing anything a human brain can, without any limits on memory size or speed.

Hinton believes it’s possible for computers to eventually gain the ability to create ideas to improve themselves. “That’s an issue…We have to think hard about how you control that,” he said in the CBS interview.

In late February, OpenAI CEO Sam Altman wrote a blog post specifically addressing the AGI problem. “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models,” Altman said in the post.

Supporters of the Future of Life Institute’s message believe that point is now. “A.I. researchers should use this time to develop a shared set of safety protocols for advanced A.I. development,” the open letter said. “This does not mean a pause on A.I. development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

The Future of Life Institute is a nonprofit based in Cambridge, Mass., that promotes the ethical and responsible use of advanced A.I. The organization is behind a 2018 pledge, signed by Musk and the cofounders of DeepMind, an A.I. lab owned by Google, promising never to develop lethal autonomous weapons for warfare.
