
Man + machine. The true paradigm of artificial intelligence

Can artificial intelligence start World War III, as Elon Musk claims? Here is why we can still be calm about the future

Is it already Apocalypse now?

Elon Musk said artificial intelligence could start World War III. If the maker of the car with the most advanced artificial intelligence on the planet says so, there must be some foundation to it. Until recently, very few people cared about the consequences of technological innovation, which was advancing at a furious pace. And we were only at the beginning; we hadn't seen anything yet.

Then the discourse on the consequences of technology started to percolate into every part of society, except among technologists, who remain convinced that they are "on a mission from God". Artificial intelligence is the chief culprit in this reversal of public perception.

When it comes to AI, a large part of the public conversation focuses not just on job losses or on China gaining the upper hand, but also, and above all, on the fear that intelligent machines will one day conquer the world, reducing man to a mere link in the food chain. Isn't that what happens in H.G. Wells's The War of the Worlds?

The implicit assumption is that humans and machines are in competition, a competition the machine will win. Intelligent systems, with their superior speed, processing power and resistance to wear, will eventually replace us: first in professions, then in organizations and finally in decisions.

There is a 2015 econometric study by the National Bureau of Economic Research, a research center known for its analyses of economic trends. The study drew this conclusion from research on the development of artificial intelligence:

"In the absence of an adequate fiscal policy that redistributes from winners to losers, intelligent machines will mean more poverty for all in the long run."

Two conditions that at the moment seem far from being met, if they have even been conceived. But one thing is already happening: a significant part of the population in developed countries is, in fact, growing poorer. And we know that impoverishment can have consequences even more brutal than artificial intelligence.

Additive intelligence

Let's try to see the matter from a different perspective. Let's ask ourselves: what if the man-machine power dynamic were not subtractive, but additive? This is the perspective proposed by Paul Daugherty and James Wilson in their book, finally translated into Italian, Human + Machine: Rethinking Work in the Age of Artificial Intelligence (Guerini, 2019, 215 pp.; also available as an ebook in co-edition with goWare).

The work of Daugherty and Wilson is neither theoretical nor storytelling: it draws its considerations from the field experience of the two authors, both of whom hold senior positions at Accenture. Daugherty is Chief Technology and Innovation Officer, overseeing artificial intelligence and R&D projects globally. Wilson heads the IT and Business Research department.

Accenture is the largest management consulting firm in the world. It breathes the same air as the companies, especially large ones, with which it works shoulder to shoulder. It is difficult to find an observatory with a better view of the terrain where innovation and change are taking place.

Daugherty and Wilson conducted observational analyses and case studies of 450 organizations out of a sample of 1,500, and identified a number of relevant phenomena that quantitative research had missed. One is the concept of "fusion skills": humans and machines together form new types of work and professional experience.

Precisely this fusion of knowledge and skills is the "ghost space": ghost in the sense that it is absent from the polarizing debate on work that pits men against machines. And it is in this central ghost space that cutting-edge companies have reinvented their work processes, achieving extraordinary improvements in performance.

Re-skilling

In his preface to the book, Paolo Traverso, director of the ICT Center of the Fondazione Bruno Kessler (FBK), sums up the thesis of the two authors very well. He writes:

The sense of the work is announced in the title: the future does not lie in machines per se, intelligent as they may be; it is not in pure industrial automation, even when pushed to the maximum to replace a high percentage of the most routine, low-creativity components of each profession. The future of society, but also of the market and business, lies where machines and people work together, where trades and business models alike will be substantially renewed. Artificial intelligence must not replace people, their skills, their creativity: it must enhance them, it must augment them.

The fundamental lever for this to happen lies in what the authors call re-skilling: preparing millions of people of all ages to work with new technologies. A titanic but unavoidable undertaking.

Even a repentant technologist like Arianna Huffington, who now advocates disconnection, greatly appreciated the work of the two authors, which ultimately points toward an absorption of technology into the human condition until it becomes an integral part of it. Here is how Huffington puts it:

«In Human + Machine, Daugherty and Wilson provide a model of the future in which artificial intelligence enhances our human side. Filled with examples, instruction and inspiration, the book is a practical guide to understanding artificial intelligence: what it means for our lives and how we can make the most of it.»

Below we provide two brief excerpts from Daugherty and Wilson's book. The first offers a brief history of artificial intelligence; history always helps understanding. The second gets to the heart of the matter, discussing the constellation of technologies that make up artificial intelligence today. In essence, artificial intelligence at this stage of development is basically deep learning. We can still control it.

Happy reading

. . .

A brief history of artificial intelligence

1956

The driving technology behind the current era of adaptive processes is artificial intelligence, which has evolved over the past two decades. Its history in brief will provide us with a context in which to frame its most advanced characteristics and potential.

The field of artificial intelligence was officially born in 1956, when a small group of computer scientists and researchers led by John McCarthy, including Claude Shannon, Marvin Minsky and others, met at Dartmouth College for the first conference dedicated to the possibility that machine intelligence could mimic human intelligence.

The conference, essentially a prolonged brainstorming session, was based on the assumption that every aspect of learning and creativity could be described so precisely that it could become a mathematical model, and therefore be replicated by machines. The goals were ambitious, starting from the design proposal: "An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves". Of course, that was only the beginning.

The conference was immediately successful in narrowing the field and unifying many of the mathematical ideas swirling around the concept of artificial intelligence.

The pioneers

And in the decades that followed, it inspired entirely new areas of research. Minsky, for example, together with Seymour Papert, wrote what is considered the foundational book on the limits and possibilities of neural networks, a type of artificial intelligence modeled on biological neurons. Other ideas can also be traced back to this event: expert systems, in which a computer is equipped with deep reserves of "knowledge" about specific fields such as architecture or medical diagnostics; natural language processing; computer vision; and mobile robotics.

Among the conference attendees was Arthur Samuel, an IBM engineer who was building a computer program to play checkers. His program evaluated the state of a checkers board and calculated the chances that a given position would lead to victory.

In 1959, Samuel coined the expression machine learning: the research field that gives computers the ability to learn without being explicitly programmed. In 1961 his machine-learning program was used to defeat the fourth-ranked checkers player in the United States.

But since Samuel was a modest man who did not engage in self-promotion, it was only with his retirement from IBM in 1966 that the importance of his work on machine learning became public knowledge.

Machine learning

In the decades that followed the conference, machine learning remained obscure as attention turned to other models of AI. In particular, research in that period focused on a concept of intelligence based on physical symbols manipulated by logical rules. These symbolic systems, however, did not find success in practice, and their failure led to a period known as "the winter of artificial intelligence."

In the 1990s, however, machine learning began to flourish again, and its proponents integrated statistics and probability theory into their approach. At the same time, the personal computer revolution began. Over the next decade, digital systems, sensors, the Internet, and cell phones would become commonplace, providing all sorts of data to machine-learning experts as they developed their adaptive systems.

Today we think of a machine-learning program as a dataset-based model builder that engineers and specialists use to train the system. It is a stark contrast to traditional computer programming, in which standard algorithms follow predetermined paths set in motion by static instructions in the programmer's code. A machine-learning system, by contrast, can learn as it works: with each new set of data, it updates its models and the way it "sees" the world. In an age when machines can learn and change through experience and information, programmers have become less like legislators and dictators, and much more like teachers and coaches.
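The contrast between a static program and a program that learns from data can be sketched in a few lines of Python. This is only an illustration: the spam-filter scenario, the keyword and the threshold are all invented.

```python
def static_rule(message: str) -> bool:
    # Traditional programming: the path is fixed in advance by the programmer.
    return "winner" in message.lower()

class LearningFilter:
    """A toy learner: re-estimates a keyword's spam probability from examples."""

    def __init__(self):
        self.spam_count = 0
        self.total_count = 0

    def update(self, message: str, is_spam: bool) -> None:
        # Each new labeled example updates the internal model.
        if "winner" in message.lower():
            self.total_count += 1
            if is_spam:
                self.spam_count += 1

    def predict(self, message: str) -> bool:
        if "winner" not in message.lower() or self.total_count == 0:
            return False
        return self.spam_count / self.total_count > 0.5

f = LearningFilter()
f.update("You are a WINNER!", True)
f.update("Winner of the chess club raffle", False)
f.update("WINNER claim your prize", True)
print(f.predict("winner winner"))  # True: 2 of 3 "winner" examples were spam
```

The static rule never changes; the learning filter's answer depends on the examples it has seen so far, which is the shift the passage above describes.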

Machine learning today

Today, artificial intelligence systems that employ machine learning are everywhere. Banks use them to protect themselves from fraud; dating sites use them to suggest potential matches; marketers use them to predict who will respond favorably to an advertisement; and photo-sharing sites use them for automatic face recognition. AI has come a long way since that first game of checkers. In 2016, Google's AlphaGo marked a significant advance in the field: for the first time, a computer beat a champion at Go, a game far more complex than checkers or chess. As a sign of the times, AlphaGo produced moves so unexpected that some observers called them creative, even "beautiful."

The growth of artificial intelligence and machine learning has been intermittent over the years, but the way both have recently broken into products and business operations shows that they are more than ready for a starring role. According to Danny Lange, former head of machine learning at Uber, the technology has finally left the walls of research labs and is fast becoming "the cornerstone of this stormy new industrial transformation".

Smart technologies and applications: how can they coexist?

Here is a glossary of the constellation of AI technologies you need to know today. These technologies correspond to the machine-learning, AI-capability and application layers shown in the figure below.

Components of machine learning

— Machine learning (ML). The field of computer science concerned with algorithms that learn from data and make predictions from data without being explicitly programmed. The area has its roots in the work of IBM's Arthur Samuel, who coined the term in 1959 and used machine-learning principles in his work on computer games. Thanks to the explosion of data available to train algorithms, machine learning is currently used in fields as diverse as computer vision, fraud investigation, price prediction and natural language processing, among others.

— Supervised learning. A type of machine learning in which an algorithm is presented with pre-classified, selected data composed of exemplary inputs and desired outputs. The goal of the algorithm is to learn the general rules that connect inputs to outputs, and to use these rules to predict future events from input data alone.
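One of the simplest supervised learners, a 1-nearest-neighbour classifier, can be sketched as follows; the data points and their "small"/"large" labels are invented for illustration:

```python
def nearest_neighbor(train, query):
    """train: list of ((x, y), label); query: (x, y). Returns the label of
    the closest training example (squared Euclidean distance)."""
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda ex: dist(ex[0], query))[1]

# Exemplary inputs paired with desired outputs, as in the definition above.
training_data = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
                 ((8.0, 9.0), "large"), ((9.0, 8.5), "large")]

print(nearest_neighbor(training_data, (1.1, 0.9)))  # small
print(nearest_neighbor(training_data, (8.5, 9.1)))  # large
```

New, unlabeled inputs are classified purely from the rules implicit in the labeled examples, which is exactly the input→output mapping the definition describes.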

— Unsupervised learning. The algorithm is supplied with no labels and is left alone to find structure and patterns in the input. Unsupervised learning can be an end in itself (discovering hidden patterns in the data) or a means to a specific end (e.g., extracting relevant features from the data). It is less focused on output than supervised learning, and more focused on exploring the input data and inferring hidden structure from unlabeled data.
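A minimal sketch of unsupervised learning is one-dimensional k-means with two clusters: no labels are given, yet the algorithm discovers the two groups hidden in the data. The "sensor readings" are invented:

```python
def kmeans_1d(data, iters=10):
    """Two-cluster k-means on a list of numbers; returns sorted centroids."""
    c1, c2 = min(data), max(data)  # initialize centroids at the extremes
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        # update step: centroids move to the mean of their group
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

readings = [1.0, 1.2, 0.9, 10.1, 9.8, 10.4]  # unlabeled input data
print(kmeans_1d(readings))  # two centroids, near 1.0 and 10.1
```

The algorithm is never told that there are "low" and "high" readings; it infers that structure from the input alone.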

— Semi-supervised learning. Uses both tagged and untagged data, usually far more of the latter. Many researchers have found that combining the two datasets considerably increases the accuracy of the learning process.

— Reinforcement learning. This is a type of training in which an algorithm is assigned a specific goal, such as operating a mechanical arm or playing Go. Every move the algorithm makes is rewarded or punished. The feedback allows the algorithm to build the most efficient path to the goal.
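The reward-and-punishment loop just described can be sketched with tabular Q-learning on a toy five-cell corridor; the environment, the reward scheme and the hyperparameters are all invented for illustration:

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                 # training episodes
    s, steps = 0, 0
    while s != GOAL and steps < 10_000:   # step cap keeps episodes finite
        steps += 1
        if random.random() < eps:
            a = random.choice(ACTIONS)                       # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])    # exploit
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0   # reward only when the goal is reached
        # feedback updates the value of the (state, action) pair
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-goal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)
```

Each move earns a reward or nothing, and that feedback alone, propagated backward through the Q-values, is what builds the most efficient path to the goal.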

— Neural network. A type of machine learning in which an algorithm, learning from observational data, processes information in a way similar to the human nervous system. In 1957, Frank Rosenblatt of Cornell University invented the first neural network: a simple, single-level architecture (known as a surface network).

— Deep learning and its subsets: deep neural networks (DNN), recurrent neural networks (RNN) and feedforward neural networks (FNN). A set of techniques for training a multilevel neural network. In a DNN, the "perceived" data is processed through several levels, each level using the outputs of the previous one as input. An RNN allows data to flow back and forth between levels, unlike an FNN, where data moves one way.
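Why multiple levels matter can be shown with a tiny hand-built two-level network that computes XOR, a function no single-level (surface) perceptron can represent. The weights here are chosen by hand purely for illustration, not learned:

```python
def step(x):
    """Threshold activation: fires (1) when the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def perceptron(weights, bias, inputs):
    return step(sum(w * i for w, i in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    # Hidden level: OR and NAND of the two inputs.
    h_or = perceptron([1, 1], -0.5, [x1, x2])
    h_nand = perceptron([-1, -1], 1.5, [x1, x2])
    # Output level uses the hidden level's outputs as its input: AND.
    return perceptron([1, 1], -1.5, [h_or, h_nand])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # prints 0, 1, 1, 0 in turn
```

The output unit never sees the raw inputs, only the previous level's outputs, which is precisely the level-by-level processing the definition above describes.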

Intelligent skill components

— Predictive system. System that finds relationships between variables in historical datasets with related outcomes. The relationships are used to develop models, which in turn are used to predict future scenarios.
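A predictive system in miniature: find the relationship between a variable and its related outcomes in a historical dataset, then use the resulting model to predict a future scenario. The ad-spend and sales figures are invented:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

ad_spend = [1.0, 2.0, 3.0, 4.0]   # historical variable
sales = [2.1, 3.9, 6.0, 8.1]      # related outcomes
a, b = fit_line(ad_spend, sales)
print(a * 5.0 + b)  # the model's prediction for a future spend of 5.0
```

The relationship extracted from history (the slope and intercept) is the model; applying it to an unseen value is the prediction.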

— Local search (optimization). A mathematical approach to problem solving that works on a large set of possible solutions. The algorithm searches for the optimal solution by starting from one point in the set and moving iteratively and systematically to neighboring solutions until it finds the best one.
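The move-to-a-better-neighbour procedure can be sketched as simple hill climbing over integers; the objective function and starting point are invented:

```python
def local_search(f, start, step=1, max_iters=1000):
    """Minimize f by repeatedly moving to the best neighboring solution."""
    current = start
    for _ in range(max_iters):
        neighbors = [current - step, current + step]
        best = min(neighbors, key=f)
        if f(best) >= f(current):   # no neighbor improves: a local optimum
            return current
        current = best
    return current

# Minimize f(x) = (x - 3)^2 starting far from the optimum.
print(local_search(lambda x: (x - 3) ** 2, start=-20))  # 3
```

On this convex objective the local optimum is also the global one; on harder problems the same procedure can stop at a merely local optimum, which is the method's well-known limitation.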

— Representation of knowledge. A field of artificial intelligence dedicated to representing information about the world in a form that the computer can use to perform tasks, such as making a medical diagnosis or holding a conversation with a person.

— Expert systems (inference). A system that uses sectoral knowledge (medicine, chemistry, law) combined with a rules-based engine that decides how that knowledge is applied. The system improves as new information is added or as rules are updated or increased.

— Computer vision. A field dedicated to teaching computers to identify, categorize and understand the content of images and videos, imitating and implementing human vision.

— Processing of audio signals. Machine-learning that can be used to analyze audio and other digital signals, especially in environments with high sound saturation. Applications include computational speech and audio and audiovisual processing.

Speech to text. Neural networks that convert audio signals into text signals in a variety of natural languages. Applications include translation, voice command and control, audio transcription and more.

— Natural language processing (NLP, natural language processing). A domain in which computers process human (natural) languages. Applications include speech recognition, machine translation, sentiment analysis.

AI application components

— Intelligent agents. Agents that interact with people through natural language. They can be used to augment human labor in customer service, human resources, training and other business areas that handle FAQ-style requests.

— Collaborative robotics (cobots). Robots that operate at slower speeds and are equipped with sensors that allow safe interaction with human colleagues.

— Biometric, face and gesture recognition. Identify people, gestures, or trends in biometric measurements (stress, activity, etc.) for human-machine interaction, or identification and verification purposes.

— Intelligent automation. Transfers some tasks from humans to machines in order to change traditional operations drastically. Through the potential and abilities of machines (speed, scale, the ability to handle complexity), these tools complement human work and extend it where possible.

— Recommendation systems. They provide recommendations based on subtle patterns identified over time by algorithms. They can be targeted at customers to suggest new products or used internally for strategic suggestions.

— Smart products. Intelligence is built into the design so that they can constantly evolve to meet and anticipate customer needs and preferences.

— Personalization. Analyze trends and patterns for customers and employees to optimize tools and products for individual users or customers.

— Recognition of text, speech, image and video. It interprets data from text, speech, images and video and creates associations that can be used to broaden analytic activities and enable advanced applications for interaction and vision.

— Augmented reality. Combines the power of AI with virtual, augmented and mixed reality technologies to add intelligence to training, maintenance and other tasks.


