TECHNOLOGY

Elon Musk calls for new AI development to be paused for six months, fearing risks to society

  • An open letter – signed by Elon Musk and more than 1,000 others with knowledge, power and influence in the tech space – calls for a halt to all “giant AI experiments” for six months.
  • The signatories consider anything more powerful than OpenAI’s GPT-4 too risky for society right now.
  • Human-competitive AI is becoming a more real concern by the day.

The risks artificial intelligence poses to society were once distant worries. But it’s no secret that the technology is now developing fast enough to outpace efforts to mitigate those risks. The guardrails are off.


Elon Musk and more than 1,000 others came together to sign an open letter stating they believe those risks are imminent if we don’t slow down our creation of powerful AI systems. The backers, including Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI pioneers Yoshua Bengio and Stuart Russell, signed on with the Future of Life Institute, which issued the letter and is primarily funded by the Musk Foundation, along with Founders Pledge and Silicon Valley Community Foundation, according to Reuters.

And there is urgency here. The group is calling for a six-month pause on all “giant AI experiments.”


In the letter, the signatories asked for a six-month break in the development of high-performance AI systems, defined as anything more powerful than OpenAI’s GPT-4.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. “Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here.”


Saying that AI could represent a “sweeping change in the history of life on Earth,” the letter’s backers argue there is currently no level of planning and management in place that matches this potential, especially as AI labs continue an “out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.”

As AI systems become better able to keep up with human capabilities at common tasks, the letter poses a series of “should we” questions about whether to let machines flood information channels with propaganda, automate away jobs, develop non-human minds that could replace humans, or risk losing control of our civilization in the hunger to create ever better neural networks.


But, as expected, not everyone agrees. OpenAI CEO Sam Altman did not sign the letter, and Umea University AI researcher Johanna Bjorklund tells Reuters the concern is overblown. “These kinds of statements are aimed at raising hype,” says Bjorklund. “It’s meant to make people worried. I don’t think there’s a need to pull the handbrake.”

OpenAI has said that at some point it may be important to get an independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used to create new models.
