SciTech Europa carries some of the comments from the ‘Artificial Intelligence: what are the impacts on our society?’ panel discussion at the Science Meets Parliaments conference at the European Parliament in February.
SciTech Europa attended the Science Meets Parliaments conference at the European Parliament, Brussels in February. The panel discussion, ‘Artificial Intelligence: what are the impacts on our society?’ addressed the question of how artificial intelligence will affect European jobs and human rights, amongst other areas.
The panel of speakers saw Alessandro Annoni from the European Commission’s Joint Research Centre (JRC) set the scene, followed by Ashley Fox, MEP and European Parliament rapporteur on a comprehensive European industrial policy on artificial intelligence and robotics; Catelijne Muller, chair of the study group on artificial intelligence; and Mady Delvaux, MEP and European Parliament rapporteur on Civil Law Rules on Robotics.
How should we define Artificial Intelligence?
Alessandro Annoni addressed the conference with a general discussion of the “strong political tensions” that were arising from artificial intelligence. Defining the technology, Annoni said: “Artificial intelligence should not be considered a simple technology…it is a collection of technologies. It is a new paradigm that is aiming to give more power to the machine. It’s a technology that will replace humans in some cases.”
He identified the main areas of opportunity and challenge within the artificial intelligence sector as:
• Digitisation – he believes that artificial intelligence is “empowering citizens to collect data in digital form.” However, this also means that the amount of data being collected is increasing exponentially and our capacity as humans to process this data is challenged
• Cybersecurity – there is a need to protect personal data while simultaneously respecting the need for data collection, in Annoni’s view
• Interconnectivity – “We are connected, everything is connected today,” Annoni added
Annoni argued that since machines are programmed by humans, and are imperfect, the main considerations about artificial intelligence should be around ethics and cybersecurity. The future of AI should be “ethical by design, secure by design.”
His conclusions were that while “at the moment it is very difficult to see if there will be a positive/negative effect on the number of jobs…We should not forget that artificial intelligence should be used to increase equality, not to increase inequality, so there are specific actions that should be taken.” He advocated collaborating at the European level to shape the unwritten future of artificial intelligence.
Will artificial intelligence eradicate whole classes of European jobs?
Ashley Fox, MEP, leader of the Conservative delegation and European Parliament rapporteur on a comprehensive European industrial policy on artificial intelligence and robotics, spoke on the implications for European jobs.
Fox said: “I think artificial intelligence and robotics has the potential not just to change our lives but to improve them. I am an optimist. I like technology, I think we should embrace innovation, and that is why I set up the all-party innovation group, which I chair. And I think where we sit in relation to AI and robotics now is similar to our ancestors 300 years ago as the age of steam was developing. This is a relatively new technology, and people are finding different ways to use it. And there were some countries that embraced it wholeheartedly, and that caused the Industrial Revolution. And the countries that embraced that new technology prospered, and their citizens got a lot better off.”
He added: “So, in relation to artificial intelligence and robotics, this new technology is happening and will happen, and the European Union has a choice: we can either embrace it wholeheartedly, or if we don’t, if we are defensive, if we impose all sorts of regulations that make it impossible to invest or develop, then that technology will happen without us. And it will happen in China and the United States, and our citizens will be poorer. So, I believe it is essential that all countries in the world should embrace this new technology, and undoubtedly this will destroy lots of jobs. But that’s not something that we should be fearful about.”
Is the loss of some classes of European jobs a positive thing?
Fox cited the Industrial Revolution as an analogy for how Europe should view the loss of jobs to artificial intelligence. Speaking about the thousands of people employed in breeding horses and running stagecoaches before the advent of the steam engine, he said: “Within a hundred years, all those jobs had been lost, and almost all people travelled by train…It was a great thing and people’s lives got better. So there are whole classes of employment now which in 100 years’ time will not exist, and that is a good thing.”
He addressed his audience at the European Parliament by saying: “This place, the European Union and the European Parliament, is always looking backwards. ‘Oh dear, we must protect these jobs.’ No, we must not. We must be joyous as they are swept away. We should embrace the fact that repetitive or possibly dangerous jobs are done away with and new jobs are created. I think this has the potential to make us all a lot wealthier. And the number of people employed in agriculture, in manufacturing, in transport, will diminish. And the number of people employed in the service sector, in leisure, will increase. We’ll have more leisure time, and we’ll be wealthier.”
Expanding on the challenges he perceives, he continued: “What is the key to ensuring that as many of our citizens [as possible] engage in this new prospect? What we have to do is educate our citizens so that they are capable of earning a living in this new economy, so as a society we will have to spend more on education and we will have to train people as classes of jobs are destroyed. And it will be a critical mistake in Europe if we set about constructing legislation to stop AI and robotics. If we have vested interests that say, for example, that we can’t have autonomous vehicles because that will put drivers out of work, then that is a vested interest. Actually, what we should be doing is first of all ensuring that those autonomous vehicles are as safe and productive as possible, [secondly] making sure that we in Europe are capable of building those autonomous vehicles, and thirdly, ensuring that those drivers who lose their jobs, and there will be thousands of them, are offered training to do something else.
“I think we should embrace this technology with a spirit of optimism. As with all new technologies, it has the potential to do good and the potential to do harm…. But economically, I think this has the potential to make all citizens in the world a lot wealthier, and it will make our world a better place.”
Is data neutral? The ethical implications of treating data as fact
Catelijne Muller, chair of the study group on artificial intelligence, said that the main challenge of artificial intelligence and robotics is “not AI becoming too smart and taking over the world, but rather the kind of stupid AI that is already taking over the world.”
Providing an example, she said: “In 2014, a girl was arrested in the United States. She was taken to a police station and there she was ranked by an algorithm, which flagged a high risk of recidivism, so she was not let out on bail and was put in jail for three days. She was black. She had no [criminal] record, and she never committed a crime after that. At about the same time, a guy was arrested. He was a seasoned criminal, he was arrested for shoplifting…and at the police station he was flagged as a low risk to commit a crime. After that he committed several crimes. He was white.”
Muller commented that the algorithm turned out to be of low quality. She said: “There is a misconception about technology being neutral, about data being facts. Data are not facts, data are not neutral. Data can be messy; data can have biases in them.”
What about the fact that this data is increasingly used by systems to make decisions about citizens’ lives? Muller asserted that such a system must be of high quality, and that this is perhaps the biggest challenge. “Let’s face it,” she concluded, “if you’re going to build a tool that’s going to potentially send someone to prison, then it is essential to do everything you can and use experts to give the system a better judgement.”