Artificial intelligence tests emotions: run for cover

In Hong Kong, artificial intelligence software has been installed in 84 primary schools that attempts, through facial recognition, to analyze pupils' emotional states. But there is no reason to rejoice, as Shira Ovide explains in the New York Times.

Good China

China doesn't give me any particular problems. I love Chinese cuisine: they are the best at cooking vegetables. Their tech industry is fabulous. The Chinese buy a lot of iPhones, and Apple's stock rises.

I believe China's aspiration to see its rightful place in the world recognized is legitimate. They have a longer history than ours. And I am also convinced that there are well-founded historical reasons that lead the Chinese not to trust Westerners.

As the young Marx wrote in the Rheinische Zeitung, the Chinese will never forgive Westerners for what they did during the Opium Wars in the era of imperialism. It was truly heinous. Can we, with the excuse of progress, export a drug through gunboat diplomacy and impose its use for commercial ends? That is what the British and their allies did in China between 1840 and 1860.

Difficult to forget, especially for a proud people like the Chinese. And it shows: it is China that now wants world leadership.

Bad China

Now, however, China is starting to give me pause. Maybe it is going too far. Which is curious, because the center of gravity of its culture is measure. Is it crossing a Rubicon?

That Rubicon is facial recognition technology, of which I will mention just one episode that strikes me as emblematic.

In Hong Kong (which is already China), 84 primary schools have installed artificial intelligence software (to put it simply) called 4 Little Trees (Four Saplings). Even the name is embarrassing: it sounds like Little Red Riding Hood with an algorithmic twist.

Four saplings planted

These Four Saplings, through the camera of any computer, analyze the emotional states expressed on the faces of the adolescents and pre-adolescents following lessons from home.

To what end? The stated aim is to give teachers additional information on pupils' performance, concentration and motivation.
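To make concrete what such a system plausibly does, here is a minimal, purely hypothetical Python sketch of the pipeline described above (face crops in, per-emotion scores out, averaged into a teacher-facing summary). The emotion labels, the 48×48 crop size and the "model" (random weights standing in for a trained classifier) are all my own assumptions for illustration; nothing here reflects how 4 Little Trees is actually implemented.

```python
import numpy as np

# Hypothetical emotion labels; the real system's label set is not public.
EMOTIONS = ["happy", "sad", "angry", "surprised", "fearful", "neutral"]

rng = np.random.default_rng(0)
# Stand-in for a trained model: a random linear layer over flattened pixels.
W = rng.normal(size=(48 * 48, len(EMOTIONS)))

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def score_face(face_48x48_gray):
    """Map one 48x48 grayscale face crop to a probability per emotion label."""
    logits = face_48x48_gray.flatten() @ W
    return dict(zip(EMOTIONS, softmax(logits)))

def classroom_report(frames):
    """Average per-frame scores into the kind of per-pupil summary
    a teacher-facing dashboard might display."""
    scores = [score_face(f) for f in frames]
    return {e: float(np.mean([s[e] for s in scores])) for e in EMOTIONS}

# Simulated "webcam frames": random noise standing in for real face crops.
frames = [rng.random((48, 48)) for _ in range(5)]
report = classroom_report(frames)
print(max(report, key=report.get))  # the label the system would surface
```

Even this toy version makes the article's point visible: the output is always a confident-looking distribution over labels, whatever garbage goes in, and none of the context that gives an expression its meaning enters the computation at all.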

I could not figure out whether the kids know they are being watched and analyzed, or whether they have at least been told what is happening. I suspect 90% of them don't know. But I suspect 100% of the parents do.

It will be said: what is strange about that? After all, the same thing happens in the classroom. A good teacher who engages seriously with the emotional states of the children in front of him does his job better.

Agreed. But do we really want to pair an idiot algorithm with the person who does the most important job in the world?

What good is it?

How can one be so naive (to be lenient with this initiative), given the ambition of the objective and the paucity of the means to achieve it?

How can software, however sophisticated, that relies on a camera stripped of all context correctly interpret the emotional state and facial expressions of a twelve-year-old in Hong Kong? First point: it does so very roughly, even though the project managers claim it gets it right 80% of the time. But let's assume that's the case.

The reports the software produces, stored in the cloud I imagine: are they the teacher's property alone, or can someone else also stick their nose in? Who can be sure the system is not accessed by some nosy apparatus? It is not such an unlikely hypothesis, because in Hong Kong, as we know, anything goes these days, and the citizens of the great city in fact find themselves living in a police state.

Ultimately we are talking about teenagers, critical ages, emotions and future generations. The stakes are high.

The Financial Times

The fable of the Four Saplings captured the imagination of the Financial Times editorial staff, who dedicated an entire page of the newspaper, entitled "The AI tools that try to read your mind", to the issue of facial recognition and its effects.

The paper then returned to the same issue of emotional AI with an editorial by its editor. At the head of the most enlightened organ of advanced capitalism (as Pajetta would say, I would add) is now the Anglo-Lebanese journalist Roula Khalaf, the first woman to hold the position in its 150-year history. Well done, Financial Times!

The editor of the London newspaper vigorously calls for the utmost caution: "Computers are not the best judges of emotions". Well, yes: it doesn't take a Columbia University degree to say that.

Shira Ovide, who writes the New York Times technology newsletter, asked an even more pointed question: "Should Alexa read our moods?". Here is her answer.

Should we worry?

In fact, facial recognition technology is starting to scare more than just the staff of the Financial Times. As long as it is a matter of unlocking an iPhone, now even while wearing a mask (but you need an Apple Watch, who would have thought!), or paying for a cappuccino, fine.

But other applications of facial recognition systems, especially those available to deep-state apparatuses and organs of social control, are beginning to generate nightmares.

Exaggerated? Come on, perhaps it is not so serious.

The complexity of metalanguage

Let's leave aside for a moment the technology and the possible unedifying uses of algorithmic recognition of facial expressions for emotional tracking.

I wonder: is it really possible to interpret unambiguously, and in a way solid enough to build a psychology on, the meaning of a facial gesture or the emotional charge of a set of expressions in a context as weak as the one captured by a camera in a remote relationship, as is the case in Hong Kong?

Given the current state of technology, it is doubtful, if only because of the modesty of the means and the grandeur of the task.

As the Financial Times rightly pointed out, each culture has its own facial expressions, each community refers to a specific system of emotional signs, each tribe has its own body language. It is context and reasoning that allow them to be interpreted meaningfully: two capacities that current software cannot reproduce.

Metalanguage is such a complex matter that it escapes even human classification, let alone a computer's.

Wittgenstein and Sraffa

The entire logical-philosophical construct of the early Wittgenstein shattered against the great thinker's inability to fit into his scheme a sardonic gesture Sraffa made at him during a walk in Cambridge. Wittgenstein was as mercilessly strict with himself and his thought as Sraffa was sophisticated and detached (see his handling of Gramsci's legacy, well investigated by Salvatore Sechi).

It is probably an urban legend, but the fact is that at a certain point Wittgenstein's view of the world suddenly changed, as he honestly admits in the preface to the Philosophical Investigations.

One consolation is that algorithms too, like the Viennese thinker, are rigorous and coherent to the point of insanity. So they will not bluff, at least until they acquire that kind of mischief.

The indifferent

How would a system of algorithms evaluate, from his facial expressions during the trial, the sentencing and the wait for execution, the emotional charge of a figure like the humble clerk Meursault (The Stranger by Luchino Visconti, 1967, free on YouTube)? Not even he knew the reason for what he had done. He was a stranger even to himself.

The same goes for Michel in Robert Bresson's Pickpocket (1959, on Mubi, free for one week). A character who is there and yet is not.

How to decode Paul Newman's (glacial) face in Cool Hand Luke (1967, on YouTube, rental) or in the (teasing) The Hustler (1961, on Chili, rental), or that of the (swindler) Mr. Ripley in Patricia Highsmith's The Talented Mr. Ripley, brought to the big screen by Anthony Minghella (1999, on YouTube, rental)?

Musk and Pisa

Here we would need brain sensors like the ones Neuralink, one of Elon Musk's moonshots, is developing. Something quite different from the Four Saplings' database of facial silhouettes!

And the irrepressible Musk is not the only one trying.

Researchers at the Piaggio Research Center of the University of Pisa have developed an emotive robot called Abel, with the appearance of a 12-year-old boy. TG1 covered it in prime time a few days ago, in a generous segment.

The face of this animated mannequin assumes different expressions in response to the emotional state of the human standing before it, whose bodily expressions it tries to interpret. It looks like a big toy, but the underlying hardware and software are very advanced.

Could the Pisa team do better than Musk's people? Surely!

The AI of the future

Perhaps the artificial intelligence systems of the post-singularity will be up to the task of emotionally profiling people, as the series neXt on Disney+ shows us well; unfortunately canceled for lack of viewers, but already a forerunner in its very premise: "We can all be hacked".

For now, it is good that facial recognition remains only a "cool" way to unlock an iPhone or pay for a brioche, because any other use would only produce misunderstandings, errors, abuses and other questionable misconduct attributable to the crudest surveillance capitalism, private and state alike.

May the Four Saplings remain only the promise of a beautiful fairy tale! For now, that is all we need. And let the Pisa team work: they are less scary than the Chinese saplings and Elon Musk's brain sensors (for now implanted only in pigs).