What exactly are the dangers posed by AI?

In late March, more than 1,000 technology leaders, researchers and other experts working in and around artificial intelligence signed an open letter warning that AI technologies pose “profound risks to society and humanity.”

The group, which included Tesla CEO and Twitter owner Elon Musk, urged AI labs to halt development of their most powerful systems for six months so that they could better understand the dangers behind the technology.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads.

The letter, which now has more than 27,000 signatures, was brief. Its language was broad. And some of the names behind the letter seemed to have a conflicted relationship with AI. Mr. Musk, for example, is building his own AI start-up and is a major donor to the organization that wrote the letter.

But the letter highlighted a growing concern among AI experts that the latest systems, most notably GPT-4, the technology introduced by the San Francisco start-up OpenAI, could cause harm to society. They believe future systems will be even more dangerous.

Some of the risks have already arrived. Others will not arrive for months or years. Still others are purely hypothetical.

“Our ability to understand what could go wrong with very powerful AI systems is very weak,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “So we have to be very careful.”

Dr. Bengio is perhaps the most important person to have signed the letter.

Working with two other academics—Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief AI scientist at Meta, the owner of Facebook—Dr. Bengio has spent the past four decades developing the technology that powers systems like GPT-4. In 2018, the researchers received the Turing Award, often called the “Nobel Prize of computing,” for their work on neural networks.

A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft and OpenAI began building neural networks that learned from huge amounts of digital text, called large language models, or LLMs.

By finding patterns in that text, LLMs learn to generate text of their own, including blog posts, poetry and computer programs. They can even carry on a conversation.
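To make that idea concrete, here is a minimal sketch in Python: a toy counting model invented for illustration, nothing like GPT-4's actual neural network. It learns which word tends to follow which in a tiny made-up corpus, then generates new text by sampling from those patterns; the corpus, variable names and function names here are all assumptions.

```python
# Toy sketch, not GPT-4: a bigram "language model" that learns which word
# tends to follow which, then generates text from those patterns.
# Real LLMs replace this simple counting with a neural network trained on
# vast amounts of text, but the underlying idea is the same.
import random
from collections import defaultdict

# A tiny made-up corpus; real models train on billions of words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# Count which words follow each word in the corpus.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start="the", length=10):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:  # no known continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())  # e.g. "the cat sat on the rug . the dog sat"
```

A model this crude only parrots word pairs it has already seen; the leap behind systems like GPT-4 is swapping the counting table for a neural network that can predict the next word in contexts it has never encountered.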

This technology can help computer programmers, writers and other workers generate ideas and get things done faster. But Dr. Bengio and other experts also warned that LLMs can learn unwanted and unexpected behaviors.

These systems can generate untruthful, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and make up information, a phenomenon called “hallucination.”

Companies are working on these problems. But experts like Dr. Bengio fear that as researchers make these systems more powerful, new risks will emerge.

Because these systems deliver information with what seems like complete confidence, it can be difficult to separate truth from fiction when using them. Experts worry that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.

“There is no guarantee that these systems will be correct for every task you give them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.

Experts also fear that people will abuse these systems to spread disinformation. Because they can converse in human-like ways, they can be surprisingly persuasive.

“We now have systems that can interact with us through natural language, and we can’t tell the real from the fake,” said Dr. Bengio.

Experts fear that the new AI could be a job killer. At the moment, technologies like GPT-4 tend to complement human workers. But OpenAI acknowledges that they could replace some workers, including people who moderate content on the web.

They cannot yet duplicate the work of lawyers, accountants or doctors. But they could replace paralegals, personal assistants and translators.

A paper authored by OpenAI researchers estimates that 80 percent of US workers could see at least 10 percent of their job duties affected by LLMs, and that 19 percent of workers could see at least 50 percent of their job duties affected.

“There are signs that routine jobs are going away,” said Oren Etzioni, the founding chief executive of the Allen Institute for AI, a research lab in Seattle.

Some signers of the letter also believe that artificial intelligence could be beyond our control or destroy humanity. But many experts say that’s a gross exaggeration.

The letter was authored by a group from the Future of Life Institute, an organization dedicated to researching existential risks facing humanity. They warn that because AI systems learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.

They fear that if companies embed LLMs into other internet services, those systems could gain unanticipated powers because they would be able to write their own computer code. They say developers will create new risks by allowing powerful AI systems to run their own code.

“If you look at a straightforward three-year projection of where we are right now, things are pretty weird,” said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a co-founder of the Future of Life Institute.

“If you take a less probable scenario — where things really take off, where there’s no real governance, where these systems turn out to be more powerful than we thought — then things get really, really crazy,” he said.

Dr. Etzioni said talk of existential risk was hypothetical. But he said other risks – most notably disinformation – were no longer speculation.

“Now we have some real problems,” he said. “They are bona fide. They require a responsible reaction. They may require regulation and legislation.”
