In an open letter highlighting potential risks to society and humanity, Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in the development of AI systems more powerful than OpenAI's recently released GPT-4.
More than 1,000 people, including Musk, signed the letter, issued by the nonprofit Future of Life Institute, which calls for a pause on advanced AI development until shared safety protocols for such systems are developed, implemented, and independently audited.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.
OpenAI did not immediately respond to a request for comment.
The letter detailed potential risks to society and civilization from human-competitive AI systems, including economic and political disruption, and called on developers to work with policymakers on governance and regulatory authorities.
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Stuart Russell and Yoshua Bengio, both of whom are frequently referred to as “godfathers of AI.”
According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, the Silicon Valley Community Foundation, and the London-based effective altruism group Founders Pledge.