MIT professor likens ignoring AGI to ‘not looking it up’

MIT professor and AI researcher Max Tegmark is deeply worried about the potential impact of artificial general intelligence (AGI) on human society. In a new essay for Time, he rings the alarm bells and paints a rather bleak picture of a future ruled by an AI that can outwit us.

“Unfortunately, I now feel like we’re living the movie ‘Don’t Look Up’ for another existential threat: unaligned superintelligence,” Tegmark wrote, comparing what he perceives as a careless response to the growing AGI threat to director Adam McKay’s popular climate satire.

For those who haven’t seen it, “Don’t Look Up” is a fictional story about a team of astronomers who, after discovering that a species-destroying asteroid is rushing toward Earth, set out to warn the rest of human society. But to their surprise and frustration, a large portion of humanity doesn’t care.

The asteroid is a great metaphor for climate change. But Tegmark thinks the story can apply to AGI risk as well.

“A recent survey showed that half of AI researchers believe that AI has at least a 10 percent chance of causing human extinction,” the researcher continued. “Having spent so much time pondering this threat and what to do about it, from academic conferences to Hollywood blockbusters, one might expect humanity to shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence.”

“Think again,” he added. “Instead, the most influential reactions have been a combination of denial, derision and resignation that is darkly funny enough to deserve an Oscar.”

In short, according to Tegmark, AGI is a very real threat, and human society isn’t doing nearly enough to stop it — or, at the very least, isn’t making sure that AGI is properly aligned with human values and safety.

And just as in McKay’s film, humanity has two choices: start taking serious steps to counter the threat — or, if events unfold as they did in the film, watch our species perish.

Tegmark’s claim is quite provocative, especially considering that many experts disagree that AGI will ever really materialize, or argue that it will take a very long time, if it happens at all. Tegmark addresses this discrepancy in his essay, although his argument is arguably not the most convincing.

“I’m often told that AGI and superintelligence won’t happen because it’s impossible: human-level intelligence is something mysterious that can only exist in brains,” Tegmark writes. “Such carbon chauvinism ignores a core tenet of the AI revolution: that intelligence is about information processing, and it doesn’t matter whether the information is processed by carbon atoms in brains or silicon atoms in computers.”

Tegmark even goes so far as to claim that superintelligence is “not a long-term problem” but “even more short-term than, say, climate change and most people’s retirement plans.” To support his theory, the researcher referred to a recent Microsoft study arguing that OpenAI’s GPT-4 large language model is already showing “sparks” of AGI, and to a recent talk by the deep learning researcher Yoshua Bengio.

While the Microsoft study wasn’t peer-reviewed and reads more like marketing material, Bengio’s warning is far more compelling. His call to action rests much more on what we don’t know about the machine learning programs that already exist, as opposed to grand claims about technology that doesn’t exist yet.

To that end, the current generation of less sophisticated AIs already poses a threat, from synthetic content spreading misinformation to AI-powered weapons.

And the industry in general, as Tegmark goes on to note, hasn’t exactly done an amazing job of ensuring slow and secure development so far; he argues that we shouldn’t be teaching AI to write code, connecting it to the internet, or giving it a public API.

It remains unclear whether, and when, AGI will ultimately arrive.

While there is certainly a financial incentive for the field to move quickly, many experts agree that whether AGI is around the corner or decades away, we should slow down the development of more advanced AI.

In the meantime, Tegmark argues, we should agree that a very real threat lies ahead — before it’s too late.

“Though humanity is racing toward a cliff, we’re not there yet, and there is still time for us to slow down, change course and avoid a fall — and instead reap the amazing benefits that safe, aligned AI has to offer,” Tegmark writes. “To do this, we have to agree that the cliff actually exists and that falling off it benefits nobody.”

“Just look up!” he added.

More on AI: Elon Musk says he’s building ‘maximum truth-seeking AI’
