Citing concerns that humanity may be in danger, leading figures in artificial intelligence are calling for a pause in the development of powerful AI systems.
In an open letter warning of potential consequences, the signatories say the race to develop AI systems is out of control.
Elon Musk, the CEO of Twitter, is among those calling for a pause of at least six months in the training of AIs above a particular capability threshold.
Apple co-founder Steve Wozniak also signed, along with several DeepMind researchers.
OpenAI, the developer of ChatGPT, recently released GPT-4, a cutting-edge system that has stunned observers with its aptitude for tasks such as identifying objects in pictures.
The letter, published by the Future of Life Institute and signed by the luminaries, asks that development be temporarily halted at that level and warns of the dangers that future, more capable systems may present.
AI systems with human-competitive intelligence pose profound risks to society and humanity, it says.
A non-profit organization called the Future of Life Institute states that its goal is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life.”
Mr. Musk, the owner of Twitter and the CEO of the automaker Tesla, is listed as the organization’s external adviser.
The letter claims that careful consideration must go into the development of advanced AIs, but lately, “AI labs have been locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter warns that artificial intelligence (AI) could automate jobs away from people and flood information channels with false information.
The letter follows a recent report by the investment bank Goldman Sachs, which predicted that while AI would probably increase productivity, it also had the potential to automate millions of jobs.
Other experts, however, told the BBC that it was very difficult to predict how AI would affect the labor market.
Outdated and outwitted
The letter also asks a more speculative question: “Should we develop non-human minds that may ultimately outnumber, outsmart, obsolete, and replace us?”
In a recent blog post cited in the letter, OpenAI warned of the dangers of creating artificial general intelligence (AGI) carelessly: “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too.”
The company stated that “coordination among AGI efforts to slow down at critical junctures will probably be important.”
The BBC has asked OpenAI whether it supports the letter, but the company has not commented.
Mr. Musk was a co-founder of OpenAI, although he left its board several years ago and has tweeted critically about its current direction.
Like most comparable systems, the autonomous driving features produced by the automaker Tesla rely on AI technology.
The letter requests that “the training of AI systems more powerful than GPT-4 be immediately suspended for at least six months.”
Governments should intervene and impose a moratorium if such a delay cannot be swiftly implemented, it asserts.
It would also be necessary to create “new and capable regulatory authorities dedicated to AI.”
Several recent proposals for regulating technology have been made in the US, UK, and EU, but the UK has rejected the idea of a regulator dedicated specifically to AI.