
Where Artificial Intelligence Went Wrong: Top Scientists’ Warnings.

Risks of Superintelligence.

It is not far-fetched: over the last several years, top scientists and industry leaders have warned of catastrophe if machines gain the power to think for themselves.

With the convergence of megatrends in massive data, lightning-fast processing speeds, and renewed competitive fever among the MAFIA (Microsoft, Alphabet, Facebook, IBM, Amazon), AI is poised to disrupt on a scale that could surpass the Internet itself.

In 2015, Tesla’s Elon Musk, Apple co-founder Steve Wozniak, Google DeepMind chief executive Demis Hassabis, Google director of research Peter Norvig, Microsoft managing director Eric Horvitz, and Professor Stephen Hawking, along with 1,000 AI experts, signed an open letter warning of a “military artificial intelligence arms race” and calling for a ban on “offensive autonomous weapons”.

Here is a look at the top scientists’ warnings.

John McCarthy

Father of AI, who coined the term in 1955 and defined it as the science of making intelligent machines.

As soon as it works, no one calls it AI anymore.

Five areas can be identified as risky: data risk, autonomous weapons, social media manipulation, algorithms, and human-machine interactions.

Stephen Hawking

Theoretical physicist, cosmologist, author.

The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Noam Chomsky

Founder of modern linguistics, journalist, anti-war activist.

The “new AI”, focused on using statistical learning techniques to better mine and predict data, is unlikely to yield general principles about the nature of intelligent beings or cognition. The adoption of an approach reminiscent of behaviorism, except in a more modern, computationally sophisticated form, is misguided.

The urge to gather more data is irresistible, though it’s not always clear what theoretical framework these data might fit into. These debates raise an old and general question in the philosophy of science: What makes a satisfying scientific theory or explanation, and how ought success be defined for science?

Raymond Kurzweil

Inventor, futurist.

A human-level AI could help find solutions to global problems such as climate change, disease, or even mortality. But power is still not well balanced between centralized companies and a decentralized populace, especially as it concentrates in companies like Facebook, Amazon, and Google.

Nick Bostrom

Oxford professor, transhumanist.

Superintelligence is an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. I can see a superintelligence hijacking political processes, subtly manipulating financial markets, biasing information flows, or hacking human-made weapons systems.

Elon Musk

SpaceX & Tesla.

The existence of superintelligence would be an existential risk: an innovation that could lead to our extinction if, for example, our existence got in the way of the AI’s pursuit of its goals, which at a certain point would be beyond our control.

Martin Ford

Futurist, NYT Bestselling Author.

The risks include the vulnerability of interconnected, autonomous systems to cyber-attack or hacking; the susceptibility of machine learning algorithms to bias, in some cases on the basis of race or gender; and the impact on privacy. China, for example, is currently building a very scary Orwellian surveillance state using AI, especially facial recognition. If we are not careful, this may also impact the West.

So where does that leave us?

Finally, we need to accept that AI will surpass human intelligence. Machine learning and deep learning have advanced much faster over the last few years than people anticipated. But we also need to understand that there is no singular, tremendous, and dreadful AI controlled by a single power. Instead, there are millions of different AIs controlled by anyone who owns a smartphone.

Our time can be called one of transformation. The development of the Internet and the boom in deep learning and AI build on the outcomes of large-scale brain science and have overcome most of the technical barriers. Therefore, we can have positive expectations for future developments.