AI is as dangerous as the atomic bomb
At the Aspen Security Forum in the US, former Google CEO Eric Schmidt sounded the alarm about the dangers posed by AI.
When we talk about artificial intelligence, we most often put its skills to the test. It is indeed capable of great things. However, some well-informed figures do not fail to warn against the danger such an invention poses to humanity. The latest of them is Eric Schmidt. The former boss of the Mountain View company took advantage of the Aspen Security Forum to share his concerns.
AI is the new atomic bomb
It was on August 6, 1945 that the United States dropped the first atomic bomb on Hiroshima. Three days later, Nagasaki was bombed in turn. Even today, these cities suffer from the effects of those bombs.
This is the comparison former Google boss Eric Schmidt draws with artificial intelligence. For him, it is just as dangerous as these nuclear weapons: it would be capable of destroying human existence. That is why he is calling on world powers such as China and the United States to reach an agreement on the issue.
“Eventually, in the 1950s and 1960s, we created a world where nuclear testing had a ‘no surprises’ rule. […] You have to start building a system where, because you arm yourself or get ready, you trigger the same thing on the other side. We don’t have anyone working on that, and yet AI is that powerful,” said Eric Schmidt.
In his view, it is therefore important to take action now.
Artificial intelligence must be regulated
To underline the danger that artificial intelligence poses to humanity, Eric Schmidt recommends a deterrence treaty like the one that exists today between states possessing weapons of mass destruction. That treaty was born in the aftermath of the Second World War, as the world's largest countries began to equip themselves with nuclear weapons. This international agreement now prohibits any state from conducting nuclear tests without first warning the others.
The ex-Google boss argues that a similar agreement should be established for AI, since it may one day prove to be a danger to all of humanity. This statement also echoes the reasons why the tech figure launched the AI2050 fund in February 2022, which is meant to finance “research on ‘hard problems’ in artificial intelligence”. Among these problems are the excesses of artificial intelligence, programming bias in algorithms, and geopolitical conflicts.
However, this isn’t the first time a Google figure has issued such a warning about artificial intelligence. Back in 2018, Sundar Pichai, the current head of Google, voiced his own concerns.
“Advances in artificial intelligence are still in their infancy, but I consider it to be the most profound technology humanity will work on, and we must ensure that we use it for the benefit of society (…) Fire kills people too. We have learned to control it for the good of mankind, but we have also learned to control its evils,” he said.
We may not be at that level yet, but the omens are there. Just a few days ago, Blake Lemoine, a Google employee, claimed that the artificial intelligence he was working on had become conscious. He was fired shortly afterwards.