Featured article

Towards a responsible artificial intelligence



For a long time, we thought the idea came out of science fiction: machines so intelligent that they could perform tasks in place of humans, or even replace them completely. Big-budget movies were full of it, from Minority Report (2002) to A.I. Artificial Intelligence (2001), from Blade Runner (1982) to Ghost in the Shell (1995 and 2017). And then, in recent years, the predictions became concrete. Artificial intelligence entered the world of business and our daily lives, raising a question that could become crucial to humanity’s future: in a world shaped by machine intelligence, how can we guarantee that it will always serve everyone?


A brief history of AI

AI is developing today because it’s based on a relatively recent phenomenon: Big Data. Information and communication technologies first, then connected objects, caused an explosion in the amount of data humanity generates every day. We are learning to collect, analyze and use this data to improve a service or a product, or even to address certain environmental and social challenges. One of these possible uses is artificial intelligence. Machines learn by being fed large volumes of data, thanks to techniques such as machine learning (simple learning algorithms that allow a machine to “make sense” of a dataset) and, increasingly, deep learning (layered learning, which allows computers to perform tasks until now accessible only to human brains, such as recognizing an image). In the end, the computer becomes capable of performing tasks for which it was not explicitly programmed. And many experts predict that around 2050, artificial general intelligence (AGI) will be widespread, with skills matching those of humans in a number of areas.
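To make the idea concrete, here is a minimal sketch of what “learning from data rather than from explicit rules” means in practice. The data, labels and function names below are invented for illustration; real machine-learning systems use far larger datasets and more sophisticated algorithms.

```python
# Toy illustration of supervised machine learning: the program is never
# given rules for telling cats from dogs. It infers a decision from
# labelled examples, here using the simplest possible method:
# classify a new point with the label of its nearest training example.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda item: sq_dist(item[0], query))[1]

# Invented training data: (height_cm, weight_kg) -> species label.
examples = [
    ((30, 4), "cat"), ((35, 6), "cat"),
    ((60, 25), "dog"), ((70, 30), "dog"),
]

print(nearest_neighbour(examples, (33, 5)))   # near the cat examples -> "cat"
print(nearest_neighbour(examples, (65, 28)))  # near the dog examples -> "dog"
```

Deep learning follows the same principle, but replaces the hand-picked distance comparison with many stacked layers of learned transformations, which is what lets it handle tasks like image recognition.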


An AI in the service of humanity…

What this future promises is a machine so intelligent that it could become a real personal assistant, soon indispensable in the world of business as in everyday life. For Alexandre Alahi, a researcher in Stanford University’s artificial intelligence labs, quoted by Le Parisien: “We will have robots in our houses, in our streets with automated vehicles, but also in train stations, hospitals, and the city in general (…). Our houses and public spaces will become intelligent in order to allow us to increase our security, our health, our productivity.” AI can already predict the maintenance needs of a boiler or of industrial equipment; it allows companies to offer tailor-made services to their clients; it can perform simultaneous translation, help manage traffic flow in cities and even create its own artwork. Tomorrow, thanks to AI, cars will drive themselves, we will go through self-checkout without emptying the trolley, and first responders will know the details of an accident before even reaching the scene, thanks to image recognition. The list is long, potentially infinite.



… that serves the common good

The announced omnipresence of AI raises important ethical questions. To ensure that it remains in the service of all, it is essential to reflect on roboethics now. The term covers, on the one hand, the ethics of robots’ creators and, on the other, the moral sense that machines will be given. In short, it is a question of ensuring that AI cannot evolve in such a way as to make decisions contrary to the common good. As summarized by Jean-Gabriel Ganascia, president of the CNRS ethics committee, in an interview with Regards sur le Numérique, we must “reflect on the human values we integrate into machines, in order to ensure their actions are not decided solely by reprogrammable learning systems and, as a consequence, become unpredictable.” This is the paradox of roboethics: by definition, AI must be capable of performing tasks for which it has not been programmed, and which we therefore cannot foresee.

This demand raises another question: with what data do we feed the machine? In March 2016, a chatbot launched on Twitter by Microsoft had to be deactivated within hours: learning all too well from its interactions with trolls who sought to push its boundaries, Tay began making racist and conspiracist remarks and denying the Holocaust after just eight hours. Now imagine an artificial intelligence that has the power to make concrete decisions (interrupting public transportation or the electricity supply, for example) and that is equally suggestible: the damage could be devastating. Finally, we cannot talk about ethics without talking about the governance of Big Data: AI relies mainly on data generated by citizens, consumers and customers, and then held by public or private actors. It is therefore essential that those who generate the data know who collects it and for what purpose, and that they have a right of control over how it is used. One of the major challenges of the coming years will be to increase society’s “data literacy” as a whole, from classrooms to boardrooms, and to encourage open dialogue on what the data makes possible. An artificial intelligence in the service of all, hopefully.


More than 100 experts from around the world will take part in the WFRE from 17 to 19 October to discuss the technological, societal and economic upheavals of our time and present their reflections and good practices.

SAVE THE DATE  17 – 19 October 2017 in Lille