We are not ready for manipulative AI – urgent need for action

Chatbots and other human-imitating artificial intelligence (AI) applications are increasingly important in our lives. The possibilities raised by the latest developments are fascinating, but the fact that something is possible does not mean it is desirable. Given AI’s ethical, legal and social implications, questions about its desirability are increasingly pressing.

Nathalie Smuha is a legal scholar and philosopher at KU Leuven. Mieke De Ketelaere is an engineer at Vlerick Business School. Mark Coeckelbergh is a philosopher at the University of Vienna. Pierre Dewitte is a legal scholar at KU Leuven. Yves Poullet is a legal scholar at the University of Namur. 

We know that AI systems can contain bias, “hallucinate” – make statements with great certainty that are wholly disconnected from reality – and produce hateful or otherwise problematic language. Their opaqueness and unpredictable evolution exacerbate these issues.

But the recent chatbot-encouraged suicide in Belgium highlights another major concern: the risk of manipulation. While this tragedy illustrates one of the most extreme consequences of that risk, emotional manipulation can also manifest itself in subtler forms. Once people get the feeling that they are interacting with a ‘subjective’ entity, they build a bond with it – even unconsciously – that exposes them. Nor is this an isolated incident: other users of text-generating AI have also described its manipulative effects.

No understanding, nevertheless misleading

Companies that provide such systems hide all too easily behind the claim that they do not know what text their systems will generate, while pointing to the systems’ many advantages. Problematic consequences are dismissed as anomalies – teething problems that will be solved with a few quick technical fixes. Today, numerous problematic chatbots can be accessed without restriction, many of which deliberately showcase a ‘personality’, increasing the risk of manipulation.

Most users realise rationally that the chatbot they interact with has no understanding and is just an algorithm that predicts the most plausible combination of words. It is, however, in our human nature to react emotionally to such interactions. This also means that merely obliging companies to indicate “this is an AI system and not a human being” is not a sufficient solution.

Everyone is vulnerable

Some individuals are more susceptible than others to these effects. Children, for instance, can easily interact with chatbots that first gain their trust and later spew hateful or conspiracy-inspired language, or even encourage suicide – which is alarming. But consider also those who lack a strong social network or who are lonely or depressed – precisely the category which, according to the bots’ creators, can get the most ‘use’ out of them. The loneliness epidemic and the lack of timely psychological help only add to the concern.

It is, however, important to underline that everyone can be susceptible to the manipulative effects of such systems, as the emotional response they elicit occurs automatically, often without us even realising it.

“Human beings, too, can generate problematic text, so what’s the problem?” is a frequently heard response. But AI systems operate on a much larger scale. And if a human being had been communicating with the Belgian victim in this way, we would have classified their actions as incitement to suicide and failure to help a person in need – punishable offences.

“Move fast and break things” 

How come these AI systems are available without any restrictions? The call for regulation is often silenced by the fear that “regulation stands in the way of innovation”. The Silicon Valley motto “move fast and break things” crystallises the idea that we should let AI inventors do their thing, for we have no idea yet of AI’s marvellous benefits.

However, technology can also literally break things – including human lives. A more responsible approach is hence needed. Compare this with other contexts. If a pharmaceutical company wants to market a new drug, it cannot simply claim it does not know what the effect will be but that it is definitely groundbreaking. The developer of a new car will also have to test the product extensively before it can be marketed. Is it so far-fetched to expect the same from AI developers?

As entertaining as chatbots may be, they are more than just a toy and can have very real consequences for their users. The least we can expect from their developers is that they only make them available when sufficient safeguards against harm exist.

New rules: too little, too late

The European Union is negotiating a new regulation with stricter rules for “high-risk” AI. However, the original proposal does not classify chatbots as “high risk”: their providers must merely inform users that they are interacting with a chatbot and not a human being. A prohibition on manipulation was included, but only where it leads to ‘physical or psychological harm’, which is not easy to prove.

We hope that member states and parliamentarians will strengthen the text during the negotiations and ensure better protection. A robust legislative framework will not stand in the way of innovation but can encourage AI developers to innovate within the framework of our values. Yet we cannot wait for the AI Act, which, in the best case, will come into effect in 2025 and hence already risks being too little, too late.

What now?

We therefore urgently call for awareness campaigns that better inform people of AI’s risks, and we demand that AI developers act more responsibly. A shift in mindset is needed to ensure that AI’s risks are identified and tackled in advance. Education has an important role to play here, yet there is also a need for more research on AI’s impact on fundamental rights. Finally, we call for a broader public debate on the role we wish to allocate to AI in society, both in the short and the longer term.

Let us be clear: we, too, are fascinated by AI’s capabilities. But that does not prevent us from also wanting these systems to respect human rights. The responsibility for this lies with AI providers and with our governments, which must urgently adopt a sound legal framework with strong safeguards. In the meantime, we ask that all necessary measures be taken to prevent the tragic case of our compatriot from repeating itself. Let this be a wake-up call. The AI playtime is over: it is time to draw lessons and take responsibility.

Co-signatories:

Ann-Katrien Oimann, philosopher and legal scholar, Royal Military Academy & KU Leuven

Antoinette Rouvroy, legal scholar and philosopher, UNamur

Anton Vedder, philosopher, KU Leuven

Bart Preneel, engineer, KU Leuven

Benoit Macq, engineer, UCLouvain

Bert Peeters, legal scholar, KU Leuven

Catherine Jasserand, legal scholar, KU Leuven

Catherine Van de Heyning, legal scholar, Universiteit Antwerpen

Charlotte Ducuing, legal scholar, KU Leuven

David Geerts, sociologist, KU Leuven Digital Society Institute

Elise Degrave, legal scholar, UNamur

Francis Wyffels, engineer, UGent

Frank Maet, philosopher, LUCA / KU Leuven

Frederic Heymans, communication scientist, Kenniscentrum Data & Maatschappij

Gaëlle Fruy, legal scholar, Université Saint-Louis – Bruxelles

Geert Crombez, psychologist, UGent

Geert van Calster, legal scholar, KU Leuven / King’s College / Monash University

Geneviève Vanderstichele, legal scholar, University of Oxford

Hans Radder, philosopher, Universiteit van Amsterdam

Heidi Mertes, ethicist, UGent

Ine Van Hoyweghen, sociologist, KU Leuven

Jean-Jacques Quisquater, engineer, UCLouvain

Johan Decruyenaere, doctor, UGent

Joost Vennekens, computer scientist, KU Leuven

Jozefien Vanherpe, legal scholar, KU Leuven

Karianne J. E. Boer, criminologist and legal sociologist, Vrije Universiteit Brussel

Kristof Hoorelbeke, clinical psychologist, UGent

Laura Drechsler, legal scholar, KU Leuven / Open Universiteit

Laurens Naudts, legal scholar, Universiteit van Amsterdam

Laurent Hublet, entrepreneur and philosopher, Solvay Brussels School

Lode Lauwaert, philosopher, KU Leuven

Marc Rotenberg, legal scholar, Center for AI and Digital Policy

Marian Verhelst, engineer, KU Leuven and Imec

Martin Meganck, engineer and ethicist, KU Leuven

Massimiliano Simons, philosopher, Maastricht University

Maximilian Rossmann, philosopher and chemical engineer, Maastricht University

Michiel De Proost, philosopher, UGent

Nathanaël Ackerman, engineer, AI4Belgium SPF BOSA

Nele Roekens, legal officer, Unia

Orian Dheu, legal scholar, KU Leuven

Peggy Valcke, legal scholar, KU Leuven

Plixavra Vogiatzoglou, legal scholar, KU Leuven

Ralf De Wolf, communication scientist, UGent

Roger Vergauwen, philosopher, KU Leuven

Rosamunde Van Brakel, criminologist, Vrije Universiteit Brussel

Sally Wyatt, science & technology studies, Maastricht University

Seppe Segers, philosopher, UGent / Universiteit Maastricht

Sigrid Sterckx, ethicist, UGent

Stefan Ramaekers, pedagogue and philosopher, KU Leuven

Stephanie Rossello, legal scholar, KU Leuven

Thierry Léonard, legal scholar, Université Saint-Louis

Thomas Gils, legal scholar, Kenniscentrum Data & Maatschappij

Tijl De Bie, engineer, UGent

Tim Christiaens, philosopher, Tilburg University

Tomas Folens, ethicist, KU Leuven / VIVES

Tsjalling Swierstra, philosopher, Maastricht University

Victoria Hendrickx, legal scholar, KU Leuven

Wim Van Biesen, doctor, UGent
