Opacity, neutrality, stupidity: Challenges for Artificial Intelligence algorithms

Professor Marcello Pelillo, Director of the European Centre for Living Technology, discusses some of the major socio-ethical challenges for Artificial Intelligence algorithms and research.

No one at the beginning of the 2000s could have predicted what is happening today in the field of Artificial Intelligence (AI). No one would have bet that, in such a short span of time, Artificial Intelligence algorithms would become so pervasive and would be at work in a wide range of activities of our daily lives, from driving our cars to unlocking our smartphones to getting recommendations for our next purchase. Indeed, Artificial Intelligence algorithms are all around us, even if we don’t notice them; and they are here to stay.

The reason for this astonishing progress lies not so much in the invention of brand-new, sophisticated artificial intelligence algorithms as in the sheer availability of huge amounts of data and formidable computational power. This allows us to apply time-honoured machine learning models, loosely inspired by the functioning of the brain, which, as it turns out, work remarkably well. These models are now widely known as ‘deep neural networks’.
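
To give a flavour of what such a model looks like, here is a minimal sketch in Python (using NumPy) of a tiny feed-forward network: an input is passed through a stack of simple layers, each applying a linear map followed by a nonlinearity. The layer sizes and the random weights below are arbitrary choices made purely for illustration; real deep networks have millions of parameters that are tuned on large datasets rather than drawn at random.

```python
import numpy as np

# Illustrative sketch of a tiny feed-forward ('deep') network.
# Sizes and weights are arbitrary; real networks are trained, not random.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """Propagate an input vector through successive layers."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                   # hidden layer: linear map + nonlinearity
    return h @ weights[-1] + biases[-1]       # output layer: one raw score per class

sizes = [4, 16, 16, 3]                        # 4 inputs -> two hidden layers -> 3 classes
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

scores = forward(rng.normal(size=4), weights, biases)
print(scores)                                 # the largest score would be the prediction
```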

The widespread adoption of data-driven Artificial Intelligence algorithms in virtually all sectors of society, however, is posing new challenges that we need to address urgently, some of which are briefly discussed here.

We should, in fact, not forget the lesson of Norbert Wiener, one of the founding fathers of Artificial Intelligence, who as early as the 1950s warned us thus: “Any machine constructed for the purpose of making decisions, if it does not possess the power of learning, will be completely literal-minded. Woe to us if we let it decide our conduct, unless we have previously examined its laws of action, and know fully that its conduct will be carried out on principles acceptable to us!”

Opacity

Unfortunately, there is a well-known trade-off between the accuracy of Artificial Intelligence algorithms and their transparency, the most powerful ones being in fact difficult or impossible to interpret. This is especially true for deep neural networks, today’s most popular models, whose outputs are determined by millions of finely tuned parameters, making them black boxes to us.

When this issue is brought to the attention of Artificial Intelligence researchers, there are two typical reactions. On the one hand, there are those who quickly dismiss it, pointing to the fact that we constantly make use of devices and machines whose behaviour is mostly unintelligible to us, and yet we see no reason not to use them. Examples abound and include pocket calculators, smartphones, TVs, laptops, elevators, cars and airplanes, not to mention our own brains.

On the other hand, there are those who take this issue seriously and think that, in order for Artificial Intelligence to be deployable in areas that might have a significant impact on one’s life, predictive accuracy is not enough and should be complemented with a notion of explainability. Accordingly, an understanding of the inner workings of AI models becomes necessary, not only to make them more readily acceptable to end users but also to be able to intervene properly in cases of malfunction.
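
One simple way to obtain at least a coarse explanation is to treat the trained model purely as a black box and probe it from the outside. The sketch below is a minimal illustration in Python, with entirely synthetic data and a made-up stand-in for the opaque model (black_box_predict); it shows the idea behind permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, so that large drops flag the features the model actually relies on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for an opaque model: we only assume we can query it.
# Here the 'black box' secretly relies mostly on feature 0.
def black_box_predict(X):
    return (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

# Synthetic evaluation data with known labels.
X = rng.normal(size=(1000, 3))
y = (2.0 * X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

baseline_acc = (black_box_predict(X) == y).mean()

# Permutation importance: shuffle one feature at a time and record the accuracy drop.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    acc = (black_box_predict(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {baseline_acc - acc:.3f}")
```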

For example, explanation is a core aspect of due process. Judges generally provide either written or oral explanations of their decisions, administrative rule-making requires that agencies respond to comments on proposed rules, and agency adjudicators must provide reasons for their decisions to facilitate judicial review. Similar principles apply, for example, in the context of medical applications. More importantly, the General Data Protection Regulation (GDPR), which came into effect on 25 May, 2018, requires that in automated individual decision-making processes a data subject has the right to obtain ‘meaningful information about the logic involved.’

Although the precise scope of this ‘right to explanation’ is a matter of ongoing debate, it is clear that the opacity of current Artificial Intelligence algorithms is felt to be a major impediment to their deployment in a number of sensitive application areas.

Neutrality

One might think that machines are by definition neutral and cannot be subject to human prejudices, and hence that it makes perfect sense to one day replace humans with machines in delicate decision-making processes so as to avoid any form of social discrimination. In fact, machine learning algorithms are currently being used to inform judges’ decisions about bail and sentencing, and are becoming a popular assistive tool in recruiting and hiring processes. However, we should remember Kranzberg’s first law of technology, which says that ‘technology is neither good nor bad; nor is it neutral.’ This could not be truer than in the case of machine learning-based technologies.

The reason why this is the case is simple: a machine learning algorithm gets its knowledge from data, and if the data are somehow biased then the decisions made by the algorithm will be biased as well. A recent study published by ProPublica, an American non-profit organisation devoted to investigative journalism, showed that an algorithm used by US courts to predict future criminals is clearly biased against African-American people. Another oft-quoted example of how algorithms can be biased is Microsoft’s Tay software, a Twitter chatbot which, launched on 23 March, 2016, was shut down after only 16 hours of operation due to the inflammatory and highly offensive tweets it started posting. The algorithm apparently absorbed, very quickly, all sorts of social prejudices from the people it interacted with.

There are several sources of potential social discrimination for machine learning algorithms. These include, for example, the social biases of the people collecting the training data, disparities in sample sizes across groups, and the choice of features used for making decisions. Although one cannot expect to eliminate machine bias altogether, a number of researchers are now working towards reducing it as much as possible.
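
As a concrete, if simplified, illustration of what auditing for such bias can look like in practice, the sketch below (Python, with numbers that are entirely made up) compares the rate at which a hypothetical screening model favours two demographic groups; the difference between the two rates is often called the demographic parity gap.

```python
import numpy as np

rng = np.random.default_rng(7)

# Entirely synthetic example: hypothetical decisions of a screening model
# for two demographic groups (0 and 1); all numbers are invented.
group = rng.integers(0, 2, size=10_000)
# Suppose the model approves group 0 at ~50% and group 1 at ~30%,
# mimicking a bias inherited from skewed training data.
approve_prob = np.where(group == 0, 0.5, 0.3)
decision = rng.random(10_000) < approve_prob

# A simple audit: compare selection rates across groups.
rate_0 = decision[group == 0].mean()
rate_1 = decision[group == 1].mean()
print(f"selection rate, group 0: {rate_0:.2f}")
print(f"selection rate, group 1: {rate_1:.2f}")
print(f"demographic parity gap:  {abs(rate_0 - rate_1):.2f}")
```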

Stupidity

In an ironic short essay originally circulated among friends in the mid-1970s and then finally published in English in 2011, the Italian economist Carlo M. Cipolla postulated five ‘basic laws of human stupidity.’ The third (and golden) law affirms that ‘a stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.’ Although originally conceived to characterise human behaviour, Cipolla’s laws can be adapted to machines as well. Indeed, some recent research work points to the conclusion that today’s most powerful AI models are easily fooled by an ‘adversarial’ agent, and might in fact be stupid in Cipolla’s sense.

To illustrate the point, a group of computer vision researchers took an image containing, say, a school bus and gave it as input to the best-performing image classification Artificial Intelligence algorithm around (not surprisingly, a deep neural network). As expected, the algorithm responded correctly and heralded the presence of a school bus. Then, they corrupted the school bus image by carefully modifying the colours of some of its pixels to make a new image which, to a human eye, was indistinguishable from the original one. When fed with this new corrupted image, the same neural network used before announced, with very high confidence, the presence of an ostrich! One can easily imagine the deplorable consequences of this kind of mistake in real-world applications such as autonomous driving or biometrics, a fact that reminds us of Cipolla’s fifth law, which states that ‘a stupid person is the most dangerous type of person.’ Safety will therefore be another important challenge for AI researchers in the years to come and has, in fact, quickly become an active research area.
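
The sketch below gives a minimal, self-contained illustration of the mechanism behind such ‘adversarial examples’. It uses a toy linear classifier rather than the deep network of the study described above, and the class names, input size and numbers are all invented for the purpose: a uniform, tiny nudge of every ‘pixel’ in the direction suggested by the model’s own decision rule is enough to flip its answer.

```python
import numpy as np

# Toy demonstration of an adversarial perturbation on a linear classifier.
# Everything here (sizes, labels, weights) is invented for illustration only.
rng = np.random.default_rng(0)
d = 10_000                                   # pretend the input is an image with 10,000 pixels
w = rng.normal(size=d)                       # weights of the toy classifier
x = rng.normal(size=d)                       # a toy input 'image'

def predict(v):
    # Positive score -> 'school bus', negative score -> 'ostrich' (fanciful labels).
    return "school bus" if w @ v > 0 else "ostrich"

# Perturb every pixel by the same tiny amount, in the direction that pushes
# the classifier's score across its decision boundary.
eps = 1.05 * abs(w @ x) / np.abs(w).sum()    # just enough budget to cross the boundary
x_adv = x - eps * np.sign(w) * np.sign(w @ x)

print(f"per-pixel change: {eps:.4f} (pixels are on a unit scale)")
print("clean input:    ", predict(x))
print("perturbed input:", predict(x_adv))    # the label flips despite the tiny change
```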

A European ethical observatory for Artificial Intelligence

The European Centre for Living Technology (ECLT) is an interdisciplinary research centre, based in central Venice, Italy, whose mission is to study and develop new technologies embodying the essential properties of life and complex systems, such as self-organisation, evolution, learning, adaptability, and perception. The centre, established in 2004, is organised as an inter-university consortium, currently involving 17 institutional affiliates, mainly, though not exclusively, from across Europe.

Among other research areas, at ECLT we are deeply committed to studying the socio-ethical implications of AI technologies. In particular, we are proud to be part of a large H2020-funded project called ‘AI4EU’, starting in January 2019, which aims at building a comprehensive European AI-on-demand platform that will provide actors in all sectors of society with access to expertise, algorithms and tools for deploying AI-based innovations.

One of the strategic objectives of AI4EU is to promote European values for ethical, legal and socio-economic issues in AI, and to protect European society from abuses of AI technologies. To this end, at ECLT we shall set up an ethical observatory to ensure respect for human-centred AI values and European regulations, and to provide the AI community and the European authorities with up-to-date information regarding the consequences of uses and misuses of AI. We will propose a legal framework for safe and transparent AI, combat unintended biases leading to social discrimination, and ensure that social and economic concerns over job loss or potential dominance by machines are properly addressed. In short, we are determined to keep Wiener’s visionary lesson alive before it is too late.

Professor Marcello Pelillo
Director
European Centre for
Living Technology
Ca’ Foscari University of Venice
+39 041 2347588
pelillo@unive.it
www.ecltech.org
