“We must worry about artificial intelligence before it is too late”

Tribune by Pierre Ouzoulias, Emmanuel Maurel and Cédric Villani

TECHNOLOGY – What about a world in which a newspaper article or a film script is written entirely by artificial intelligence? What would you think of a society in which court decisions are handed down by an algorithm? Of a music channel that plays you a new piece whose lyrics, instruments and arrangement were composed from A to Z by a neural network? Would you agree to work for a company that lets one machine review your résumé and gives another the right to decide when you may take a break?

These examples are not drawn from Brave New World or 1984, those essential references of dystopian literature. They are only a tiny part of what artificial intelligence already makes possible. All of them have already been implemented here and there, giving rise to legitimate debate, in the wake of the well-known book by computer scientist Cathy O’Neil, Weapons of Math Destruction (published in French as Algorithmes, la bombe à retardement).

The question is no longer whether we should approve of these upheavals. It is too late for that, and rejecting them wholesale would mean forgetting their more appealing sides. Algorithms, expert at diagnoses based on medical data, have already saved lives. In Singapore, they adjust traffic lights in real time to clear the way for an ambulance called out on a life-threatening emergency. They bring more than welcome relief to farmers. Every day, they give millions of users directions to the quickest route or to the right bus stop in an unfamiliar city. They optimize electricity consumption at the household or neighbourhood level, help position wind turbines, and dig through the depths of large medical databases to develop new treatments.


Which institutions and which rules would allow us to sort things out properly, or to strike the right compromise, regulating AI while allowing the progress it makes possible? Some work has been done; much remains to be done. Questions related to AI often fly under the political and media radar. This is no doubt due, in no particular order, to the esoteric jargon of “tech” and to the lobbying of large international companies that present AI as progress to be accepted wholesale, embedded in solutions that are disproportionate in their technological complexity, their waste of materials and energy, and sometimes the inhuman working conditions at their contractors and suppliers. The company Tesla is emblematic of this travesty of progress. Its founder does not hesitate to describe AI as an existential threat to humanity, well aware of how effective such a provocation is at seducing investors.

Rightly, the impact on work and employment fuels the greatest concerns, as well as the greatest uncertainties. The massive employment of poor workers to build ever-larger databases: should this new kind of proletariat, when it is not being replaced by an army of robots, not be recognized as such? The slavery of Amazon employees subjected to the orders of algorithms, both in their movements and in the management of their breaks, and the “optimized” schedules that have become utterly unmanageable, described in O’Neil’s book, finally prove Marx right when he asserted that “time is everything, man is nothing; he is, at most, time’s carcass.”

What do we have to say about the philosophical, legal and political implications of artificial intelligence? China is showing us how it can be used for mass surveillance and the annihilation of privacy. In the United States, it has quietly crept into judicial proceedings, most often reinforcing racially biased rulings, as advocacy associations have been able to demonstrate.


The French government, following the Council of State’s report on artificial intelligence, has decided to continue deploying the strategy launched in 2018, as shown by the provisions of the Ministry of the Interior’s orientation and programming bill (LOPMI), currently under discussion in Parliament. This should be an opportunity to ask the same questions again and again: how do we establish democratic control over the use of algorithms? How do we reconcile it with our digital sovereignty? What political meaning should be given to the growing use of these new technologies? Where is the balance between the use of algorithms and the protection of data? Who is legally responsible for an algorithm when it fails?

Time is running out, because the major corporations have a considerable head start over states. Unless we react, we will be unable to build a humanist and workable artificial intelligence policy. The European Parliament has already taken up the issue, but without the media coverage needed for a fruitful debate in public opinion.

The purpose of this text is precisely to help open that discussion in the weeks and months to come. Following the concept developed by Michel Callon and the late Bruno Latour, we must carry out genuine “translation” work around these fundamental questions, so that citizens can grasp a subject that will, sooner or later, end up affecting them. It is essential that our political organisations, trade unions, associations and academics act together so that tomorrow we are all able to meet this immense challenge.
