Photo edit of Elon Musk/Twitter. Credit: Alexander J. Williams III/Popacta

A group of more than 1,000 tech experts, including Elon Musk and Steve Wozniak, has signed an open letter calling for a temporary pause on the development of AI systems more powerful than OpenAI’s GPT-4. The letter, issued by the Future of Life Institute, highlights the potential risks powerful AI poses to society and civilization and calls for safety protocols to be developed under the oversight of independent experts. The signatories do not call for a complete halt to AI development; rather, they emphasize the need for robust governance systems to ensure that AI’s positive effects can be guaranteed and its risks managed. However, concerns have been raised about potential conflicts of interest in the funding of the Future of Life Institute.

Tech Experts Call for Pause on Powerful AI Development

In an open letter signed by more than 1,000 people, including Elon Musk and Steve Wozniak, the Future of Life Institute has urged AI labs to pause the development of AI systems more powerful than OpenAI’s GPT-4 for at least six months. The letter highlights the potential risks powerful AI poses to society and civilization, and calls for safety protocols to be developed under the oversight of independent experts to guide the future of AI systems. The signatories emphasize that they are not asking for a pause on AI development in general, but rather for a step back from the “dangerous race” toward unpredictable black-box models with emergent capabilities.

The Risks of Powerful AI

The undersigned tech experts warn that at this stage, nobody can understand, predict, or reliably control the powerful new tools developed in AI labs. They cite the risks of propaganda and lies spread through AI-generated articles that look real, and the possibility that AI programs can outperform workers and make jobs obsolete. Therefore, they urge AI labs and independent experts to use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.

The call for a pause on the development of powerful AI highlights the risks posed by this rapidly advancing technology. It emphasizes the need for safety protocols and governance systems to be put in place to manage the risks and ensure the positive effects of AI development. While there may be concerns about funding sources and conflicts of interest, the signatories’ goal is to promote responsible AI development that benefits society and civilization.

AI Development and Governance

While the letter calls for a pause on the development of powerful AI, the signatories do not call for a complete halt to AI development. Instead, they urge AI developers to work with policymakers to dramatically accelerate the development of robust AI governance systems, ensuring that powerful AI systems are developed only once their positive effects can be guaranteed, and their risks can be managed.

The letter, signed by Elon Musk and more than 1,000 other tech experts, makes this request of AI developers:

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

It also warns that “[no one] can understand, predict, or reliably control” the powerful new tools being developed in AI labs.

Funding Sources

The Future of Life Institute is primarily funded by the Musk Foundation, the Effective Altruism group Founders Pledge, and the Silicon Valley Community Foundation. This has raised concerns about potential conflicts of interest in its call for a pause on AI development. The signatories, however, insist that their call for safety protocols is in the interest of society and civilization as a whole.



