Elon Musk, Stuart Russell, Steve Wozniak and others signed an open letter calling on AI labs to pause the training of powerful AI systems for six months.
Takeaway Points
- Elon Musk, Stuart Russell, Steve Wozniak and other tech leaders called for a six-month pause in the training of powerful AI systems.
- The call was made in an open letter addressed to all AI labs.
- The pause aims to prevent the creation of AI that could pose serious risks to humanity.
Who Signed the AI Letter?
Elon Musk and other tech leaders wrote an open letter asking all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months. According to them, creating AI with human-competitive intelligence poses serious risks to humanity.
They further argued that AI labs have been racing to build increasingly powerful systems without adequate planning or care.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”
Why Is Elon Musk Warning About AI?
According to the letter, these are some of the questions we need to ask ourselves: “Should we let machines flood our information channels with propaganda and untruth? Should we risk loss of control of our civilization? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we automate away all the jobs, including the fulfilling ones?”
Policies and Regulations
Elon Musk and the other signatories concluded that development of AI systems should resume after the pause only once they are confident the systems can be controlled and will not pose a threat to humanity and society.
They further added: “AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”
The call for a pause comes as a growing number of companies race to build their own AI systems.