Artificial intelligence: new master of the world?

Medicine, transport, manufacturing: artificial intelligence will revolutionize our lives, amid risks and opportunities. Here is a preview from Stefano da Empoli's latest book, "Artificial Intelligence: Last Call" (Bocconi).

We don't know whether Vladimir Putin is right to claim that "whoever develops the best artificial intelligence will become the master of the world". Indeed, we hope not, given that in the free society in which we would like to continue to live, there should at most be successful businesses and satisfied citizen-consumers. However, it is difficult to find a single industry that artificial intelligence will not radically transform in the years and decades to come.

Think of the valuable contributions that AI can make to medicine, helping doctors to improve diagnoses, to predict the spread of diseases with much greater precision and timeliness, and to personalize therapies. The same huge potential exists in the transport sector, where AI makes driverless vehicles possible, or in the manufacturing industry, where it is radically transforming factory work with the advent of new-generation robots, increasingly sophisticated and capable of carrying out recurring tasks, designing production models, and delivering higher levels of quality. In services, AI allows companies to respond faster to the needs of end consumers, potentially before they even go to a store or click on an app to place an order.

According to many experts, the discontinuity of a transversal technology such as AI is fully comparable to that produced by the advent of the steam engine, which enabled the first industrial revolution in England at the end of the eighteenth century; by electricity and the internal combustion engine (without forgetting oil and chemistry), which drove the second industrial revolution between the end of the nineteenth century and the beginning of the twentieth; and by the computers that laid the foundation for the most recent cycle of rapid progress. Together with, and thanks to, other digital technologies (IoT, 5G, cloud, blockchain, etc.), AI is starting a fourth revolution (industrial but not only, since it embraces all productive sectors).

Indeed, according to the economists Erik Brynjolfsson and Andrew McAfee, we can even speak of the second age of machines (thus skipping the two intermediate revolutions): if the industrial revolution of the late eighteenth century produced the first age of machines, making it possible, with the steam engine invented by Watt, to overcome the limits of muscle power, human or animal, this second radical shift of technological and economic paradigm is allowing us to pass the Pillars of Hercules represented by the capabilities of the human brain entrusted to us by Mother Nature.

Far from being a new discipline, AI was born in the 1950s, though anticipated even earlier in the studies of many scientists, the most famous of whom at the time were mainly European, such as John von Neumann and Alan Turing. The first to use the expression was John McCarthy, a young American mathematician who in 1956 decided to organize a seminar on the subject at his university, Dartmouth College, in New Hampshire. In the request for funds addressed to the Rockefeller Foundation, the working group assembled by McCarthy prophetically affirmed that "an attempt will be made to find out how we can make machines use language, formulate abstractions and concepts, solve types of problems now reserved for human beings and improve themselves".

Less apt, and more likely a useful expedient to maximize the chances of the funding application succeeding, was the prediction that "we think that significant progress could occur in one or more of these problems if a group of scientists worked together for a summer". More than fifty summers have passed since then before AI became a reality in more and more applications. And a mere group of scientists certainly wasn't enough. Today, annual investments in AI worldwide amount to several tens of billions of dollars, and everything suggests that they will rise further, and by a lot, in the next few years.

But what determined the acceleration towards the realization of the midsummer dream of 1956 were above all two factors that paved the way for the investment boom of recent years. First of all, computer performance has increased exponentially. Moore's Law, first formulated in 1965 by Gordon Moore, co-founder of Intel, holds that computational power doubles every eighteen months. Even if the continued validity of this relationship is now questioned from many quarters, its obvious implications over a relatively short span of time cannot be denied. For example, the same computational power that until recently required enormous mainframes can now be enclosed in an object the size of a simple mobile phone. Or a PlayStation.

In 1996, ASCI Red, the result of a substantial investment by the US government (55 million dollars), was the most powerful supercomputer in the world and the first to exceed the 1 teraflop threshold, reaching a record of 1.8 teraflops the following year; that computing power was equaled only nine years later by Sony's PlayStation 3. Yet instead of occupying almost 200 square meters like ASCI Red, the console fit on a small shelf, and tens of millions of units were sold. The exponential growth of computing power has thus dramatically multiplied the number of devices capable of performing extremely complex operations.
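
A quick back-of-the-envelope sketch makes the arithmetic concrete. Under the eighteen-month doubling rule cited above, the nine years separating ASCI Red's 1997 record from the PlayStation 3 correspond to six doublings, that is, a sixty-four-fold growth in available computing power. The short Python snippet below, purely illustrative and not taken from the book, works it out:

```python
# Back-of-the-envelope sketch of Moore's Law as cited in the article:
# computational power doubling every eighteen months. The dates for
# ASCI Red and the PlayStation 3 are taken from the text.

DOUBLING_PERIOD_YEARS = 1.5  # eighteen months

def doublings(years: float) -> float:
    """Number of Moore's Law doublings in a given time span."""
    return years / DOUBLING_PERIOD_YEARS

def growth_factor(years: float) -> float:
    """Multiplicative growth in computing power over the span."""
    return 2 ** doublings(years)

# ASCI Red hit 1.8 teraflops in 1997; the PlayStation 3 matched it in 2006.
span = 2006 - 1997
print(f"{doublings(span):.0f} doublings -> {growth_factor(span):.0f}x growth")
# Prints: 6 doublings -> 64x growth
```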

In parallel, digitization has made it possible to detect, transmit and process an enormous amount of data, thanks in particular to the increase in connectivity and to the falling price of the sensors through which information is collected from the outside world. The stock of data archived globally follows a Moore's Law of its own, so much so that the units of measurement available to quantify the total are starting to run out.

These two factors, very high computational capacity and huge amounts of available data, have enabled so-called machine learning, one of the fundamental components of AI, which allows machines to learn from the data they process and thus become intelligent in all respects, finally realizing the expectations of the young scientists gathered in New Hampshire over sixty years ago. They are even starting to put a strain on the so-called "Moravec's paradox", named after the Canadian artificial intelligence scientist who in his 1988 book stated that "it is relatively easy to make computers perform at the level of an adult in an intelligence test or a game of chess, but it is difficult or impossible to give them the perception and mobility skills of a one-year-old child".
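
To make "learning from processed data" concrete, here is a minimal, purely illustrative sketch, not drawn from the book: a perceptron, one of the oldest machine learning algorithms, infers a classification rule from labeled examples instead of having the rule programmed into it.

```python
# A minimal sketch of supervised machine learning: a perceptron that
# learns a rule from labeled examples rather than being hand-coded with it.
# The data and task are purely illustrative.

# Training data: points in the plane, labeled 1 if x + y > 1, else 0.
examples = [((0.0, 0.0), 0), ((0.2, 0.3), 0), ((0.9, 0.9), 1),
            ((1.0, 0.5), 1), ((0.1, 0.6), 0), ((0.8, 0.7), 1)]

w = [0.0, 0.0]  # weights, adjusted as the machine "learns"
b = 0.0         # bias term
LEARNING_RATE = 0.1

def predict(point):
    """Classify a point using the current weights."""
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Learning loop: nudge the weights whenever a prediction is wrong.
for _ in range(100):
    for point, label in examples:
        error = label - predict(point)
        w[0] += LEARNING_RATE * error * point[0]
        w[1] += LEARNING_RATE * error * point[1]
        b += LEARNING_RATE * error

print(predict((0.95, 0.8)))  # expected 1: a rule inferred from data
print(predict((0.05, 0.1)))  # expected 0
```

The rule "x + y > 1" appears nowhere in the learning loop; the program recovers it from the six labeled examples, which is the essence of what the article means by machines learning on the basis of processed data.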

Moravec's assumption, and herein lies the paradoxical aspect, is that even very sophisticated reasoning requires much less computing power than sensorimotor activity does. But in the face of the enormous increase in computational capacity, combined with the improvement of machine learning techniques, the paradox Moravec noted, while still partly valid, is increasingly being questioned. Witness the ever more sophisticated robots that companies use in their factories to move objects (as Amazon does with Kiva robots, made by a Boston-area startup acquired by the e-commerce company in 2012 for 775 million dollars) or the self-driving cars that have been a consolidated reality at the experimental level for years.
