Artificial intelligence, space travel and electric cars: the future to come

The progress made to date in artificial intelligence is staggering, as much for the results achieved as for the speed with which they keep arriving. We could soon have an AI that is much, much smarter than the smartest human on earth – and that could be a dangerous situation.

Views on the future

Speaking at the recent World Government Summit in Dubai to promote his company's electric cars, Elon Musk did not fail to address a range of disparate issues, illustrating what he believes are the challenges humanity will have to face and overcome in the near future. The South African-born billionaire, as usual, ranged across the board: from the unknowns surrounding the development of artificial intelligence and its possible social repercussions, to mobility, to interplanetary and, why not, even interstellar travel, sketching possible scenarios for the near future without neglecting ideas projected towards a medium- and long-term horizon.

Considered by many a visionary, an anticipator of tomorrow's reality, a pioneer of space travel (SpaceX), a promoter of electric mobility (Tesla) and, more recently, a proponent of eco-sustainable energy (SolarCity), Musk, as ambitious as he is wealthy, says he seeks solutions that can benefit humanity. Although there is no shortage of detractors and prudence is a must, one cannot deny him a brilliant intelligence, a constantly forward-looking gaze and, like it or not, some role in imagining and shaping the world to come.

It is therefore worth listening to what he has to tell us and then, possibly, reflecting on and questioning the topics he raises, which are anything but trivial. Here I will explore some of the topics related to artificial intelligence, starting from a few sentences taken from Musk's speech.

One of the more troubling issues is artificial intelligence… deep artificial intelligence, or what is sometimes referred to as general artificial intelligence, where we could have AI that is much, much smarter than the smartest man on earth. I think this would be a dangerous situation.

The progress made to date in artificial intelligence is staggering, as much for the results achieved as for the speed with which they keep arriving. The pace appears unstoppable and extremely rapid, accelerating rather than slowing: it tends to follow an exponential trend, in line with what Moore's law describes for the growth in the number of transistors in microprocessors. It is a rhythm that hides pitfalls for our minds.

The singularity awaits us in 2047, according to Masayoshi Son

The fact is, explain Erik Brynjolfsson and Andrew McAfee (The Second Machine Age, 2014), that progress in the field of artificial intelligence, although constant from a mathematical point of view, does not appear orderly to our eyes. The two authors illustrate this by borrowing a phrase Hemingway used to describe the spiral that leads a man to ruin: «gradually and then suddenly».

This means that an exponential progression records gradual growth, initially almost negligible, up to a point where, seemingly all at once, a sudden acceleration occurs and the quantities become immeasurable, even unimaginable, and therefore impossible to manage.
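A few lines of Python make the point concrete. This is a purely illustrative sketch, assuming the classic two-year doubling period associated with Moore's law and an arbitrary starting value of 1:

```python
# Illustrative only: exponential doubling in the style of Moore's law
# (assumed: the value doubles every two years, starting from 1 in 2001).
value = 1
for year in range(2001, 2048, 2):
    print(f"{year}: {value:,}")
    value *= 2  # one doubling per two-year step
```

For the first decades the printed numbers look modest; in the final steps each doubling adds more than all the previous growth combined – exactly the «gradually and then suddenly» pattern Brynjolfsson and McAfee describe.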

In other words, continuing at this rate, an acute discrepancy would open up between the effective computing power of machines (according to some, capable of evolving to the point of improving themselves autonomously and possibly developing their own self-awareness) and humanity's capacity to conceive, contain, predict and ultimately control it. This moment of profound rupture is known as the Singularity. Although the singularity is still a conjecture and does not command unanimous consensus, it describes an eventuality that appears increasingly concrete and, unfortunately, close.

Masayoshi Son, CEO of SoftBank, speaking at the recent Mobile World Congress held in Barcelona, stated that within thirty years the IQ enclosed in a single microprocessor will be far higher than that of the smartest among us. «Any chip in our shoes thirty years from now will be smarter than our brains. We'll be worth less than our shoes».

Son bases his prediction on a comparison between the number of neurons in our brain and the number of transistors on a chip. According to his calculations, in 2018 transistors will overtake neurons and the figures will begin to diverge. In a relatively short time, individual microprocessors will reach an IQ estimated at around 10,000, whereas the most brilliant minds in the history of humanity barely reach 200. We will therefore have to measure ourselves against what Son calls «Superintelligence»: «That is, an intelligence that is beyond people's imagination, no matter how intelligent one is. Nonetheless I am convinced that within thirty years all this will become a reality». We therefore also have a date for the advent of the singularity: 2047.
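Son's crossover arithmetic is easy to reproduce. The inputs below are assumptions chosen only to illustrate the shape of the argument (roughly 30 billion neurons, a 2-billion-transistor chip in 2010, counts doubling every two years); they are not taken from Son's slides:

```python
# Hypothetical back-of-the-envelope version of Son's comparison.
# Assumed inputs (illustrative, not Son's actual figures):
NEURONS = 30e9          # order of magnitude of neurons in a human brain
transistors = 2e9       # transistors on a high-end chip, circa 2010
year = 2010

while transistors < NEURONS:
    year += 2           # Moore's law: one doubling every two years
    transistors *= 2

print(f"Transistors overtake neurons around {year}")  # -> around 2018
```

With these inputs the crossover lands on 2018, the year cited above; the exact date obviously shifts with the assumed figures.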

Are we therefore doomed to succumb? Son says he is optimistic and confident in a future where man and machines can coexist and collaborate. «I believe this superintelligence will become our partner. If we abuse it, it will pose a risk. If we use it with honest intentions [good spirits], it will be our companion for a better life».

I think we need to pay close attention to how artificial intelligence is being adopted… Therefore I think it is important for public safety that we have a government that closely monitors artificial intelligence and makes sure it does not pose a danger to the people.

The singularity is a very serious threat to man

Google recently announced the results of research conducted on the AI developed by its DeepMind division, which with AlphaGo proved able to defeat, learning game after game (deep learning), first the European Go champion and then the world champion. Although these are preliminary results and no definitive study has yet been published, the evidence suggests that an advanced AI can adapt to and learn from the environment in which it operates. Furthermore, when it is cornered and risks succumbing, it chooses strategies described as «extremely aggressive» in order to prevail. «Researchers suggest that the more intelligent the agent, the more capable it is of learning from its environment, and thus the more able to use extremely aggressive tactics to come out on top».
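To give a sense of what «learning from its environment» means in practice, here is a minimal tabular Q-learning sketch – a deliberately simplified stand-in, since AlphaGo actually combines deep neural networks with tree search, far beyond these few lines:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
q = {}                                   # (state, action) -> estimated value

def choose(state, actions):
    """Explore occasionally; otherwise exploit the best-known action."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def learn(state, action, reward, next_state, next_actions):
    """Nudge the value estimate toward reward + discounted best future value."""
    best_next = max((q.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

Repeated against an environment thousands of times, updates like this let behaviour – aggressive or cooperative – emerge from whatever maximises the reward signal, with no one having programmed the tactics explicitly.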

Several authoritative voices have expressed the fear that a particularly advanced AI could pose a very concrete threat. Among them is Stephen Hawking, who believes that the very continuity of the species may be at risk: «The development of full artificial intelligence could spell the end of the human race».

Hawking, Musk and other prominent figures, such as Steve Wozniak and Noam Chomsky, have signed an open letter warning about the risks inherent in the development of autonomous weapons systems and asking the UN to ban them. «Artificial intelligence technology has reached a point where the deployment of autonomous weapons is – de facto if not legally – a matter of years, not decades. And the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear weapons».

The sophisticated AI developed by DeepMind has shown not only that it knows how to be aggressive in order to prevail, but also that it can recognize and implement, when useful and necessary, cooperative strategies with other artificial intelligences. «… the message is clear: if we pit different AI systems against one another over competing interests in real-life situations, all-out war could ensue unless their goals are balanced against the overarching goal of benefiting us human beings above everything else». The enormous complexity of an artificial intelligence made up of innumerable interconnected networks constitutes, in itself, a challenge that could prove far beyond man's ability to govern it.
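A toy way to see why «balancing goals» matters is the classic prisoner's dilemma – a standard game-theory illustration, not DeepMind's actual experiment:

```python
# Standard prisoner's dilemma payoffs (row player, column player).
# Mutual cooperation beats mutual defection, but defecting against
# a cooperator pays best individually.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """A purely self-interested agent defects whatever the other does."""
    return max("CD", key=lambda my_move: PAYOFF[(my_move, opponent_move)][0])

print(best_response("C"), best_response("D"))  # -> D D
```

Two such agents settle on mutual defection and the payoff (1, 1), even though mutual cooperation would give both (3, 3): a minimal picture of how purely self-interested goals, left unbalanced, lead everyone to a worse outcome.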

The side effects of AI

However, even before a Superintelligence proves lethal on the battlefield, or decides to turn against humanity like the Skynet supercomputer in The Terminator, other dangers exist. Some time ago Musk had already warned of possible side effects, fatal or at any rate unpleasant, that could arise in far more trivial situations and for far more trivial reasons. We must therefore be extremely careful and cautious when programming our smart devices. A poorly programmed AI, says Musk with deliberate hyperbole, «could conclude that all unhappy humans should be eliminated. … Or that we should all be captured and treated with dopamine and serotonin injected directly into the brain to maximize happiness, because it has concluded that dopamine and serotonin are what induce happiness, so it boosts them to the greatest possible degree». Once again, the more the complexity of intelligent systems grows and their ability to connect and network increases, the harder it becomes to manage and predict the effects of their operation.

Recently a large group of scientists, researchers and entrepreneurs (3,441 at the time of writing) signed an open letter drawn up on the occasion of the Future of Life Institute's 2017 Asilomar conference, with the aim of setting out a set of guidelines, ethical ones included, that should inform research in the field of artificial intelligence. The Asilomar AI Principles, in twenty-three points, «range from research strategies to data protection to future issues, including a possible super-intelligence». The goal, once again, is to try to steer the progress of AI towards the common interest and ensure a beneficial future for all of humanity. «I am not a proponent of war, and I think it could be extremely dangerous… I obviously believe that technology has enormous potential, and, even with just the capabilities we possess today, it is not hard to imagine how it could be used in particularly harmful ways», said Stefano Ermon of the Department of Computer Science at Stanford University, one of the signatories of the document.

Stephen Hawking, also a signatory of the Asilomar AI Principles, is the author of a lucid and heartfelt article which appeared last December in The Guardian under the telling title This is the most dangerous time for our planet. The well-known astrophysicist underlines how humanity will have to contend with enormous social and economic changes. To the effects of globalization and the increase in inequalities and in the concentration of wealth and resources in the hands of a few will be added «… the acceleration of technological transformation». As if that were not enough, «we are facing staggering environmental challenges: climate change, food production, overpopulation, the decimation of other species, epidemics, ocean acidification».

All this evidence constitutes both a warning and an impending threat. The conclusion is clear: «Together, they remind us that we are at the most dangerous moment in the development of humanity. We now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it».

Therefore, concludes Hawking, «For me, the really concerning aspect of this is that now, more than at any time in our history, our species needs to work together». Cooperate, then, collaborate, care for those who have been left behind and, even more, for those who will be; reduce inequalities, unite rather than divide, share and work for the common good rather than in the interest of a few. Advances in artificial intelligence will play a major role in this scenario: they could exacerbate imbalances and inequalities and cause society as we know it to implode, or help smooth out conflicts and differences.

Learn from history

Masayoshi Son's words on the future of AI come back to mind: «If we abuse it, it will pose a risk. If we use it with honest intentions [good spirits], it will be our companion for a better life». Hawking, despite everything, is confident: «We can do it, I am an enormous optimist for my species; but all of this will require elites, from London to Harvard, from Cambridge to Hollywood, to learn the lessons of the past year. To learn, above all, a measure of humility». Looking back, elites have rarely distinguished themselves by farsightedness, much less by humility. Yet the transformations underway, and the ultimate risk of possible extinction, demand a change of course that is also in the interest of the few who benefit from the status quo. There is no doubt the elites know this; the real question is whether they are also intimately persuaded of it. After all, the lesson history offers urges us not to be indolent and linger because, as Hawking himself ruefully acknowledges, «… let's face it, it's mostly the history of stupidity».
