When there is nobody else around and no better alternative, we tend to copy the decisions of artificial intelligence, even when those decisions are foolish. Professor Michał Klichowski, who discovered this phenomenon, in conversation with Michał Rolecki

Michał Rolecki: The latest issue of “Frontiers in Psychology” has published the results of your research on the trust we place in artificial intelligence. Why would a psychologist decide to take up that subject?

Michał Klichowski*: A psychologist and an education researcher. The published results are part of a larger research project in which I have been trying to verify whether classic social psychology theories describing interactions between people also hold for interactions between humans and machines. The goal of the first experiment was to check whether the social proof theory, according to which people copy the actions of others in certain situations (danger, uncertainty, insufficient information), also applies when there are no people whose actions might be copied, only machines.

Does that mean you wanted to check whether we imitate machines when there is nobody else to imitate?

I guess you can say that. My experiments have shown that we do. I called this mechanism the “artificial intelligence proof”. Interestingly, the decisions of artificial intelligence in my experiments were absurd and irrational. And yet they were copied by over 85 percent of participants. The result is extremely interesting because it shows how powerful fake news could be if it were prefaced with “According to artificial intelligence…”. It seems that most people could be heavily influenced by it and do whatever AI suggests.

What did your study involve?

We conducted two experiments: one online and the other in a laboratory. In both cases the participants had to make an urgent decision in a critical situation. Without enough data, they had to decide which of six people was a terrorist. Some of the participants were given the option to see which person artificial intelligence suspected, although the AI’s choice was not only the worst one but also illogical. Yet, having seen what artificial intelligence did, the majority of participants copied its decision. After the experiment, everyone stressed that they trusted artificial intelligence and had copied its decision because it was smarter and made better decisions than people.


Did people really copy the decision of the machine even though they knew it was absurd?

Yes, at least that’s what most of them said in the interviews after the experiment. Sometimes they stopped thinking straight and blindly followed the machine’s suggestion. They believed that artificial intelligence couldn’t be wrong.

It reminds me a bit of the famous Milgram experiment. Although the participants knew that their actions were wrong, they obeyed the authority of the researcher and administered what they believed were electric shocks to others.

Yes, it’s a similar mechanism, called “obedience to authority”. That theory is going to be tested in my next experiment; the one we are discussing was about copying an action. But you are right: many people regard artificial intelligence as an authority. It’s a very disturbing thought.

AI-based car autopilots are becoming ever more present in our lives, self-driving vehicles are being improved, and armies all over the world are building autonomous military robots armed with weapons… It’s not wise to put implicit trust in machines, is it?

No, it’s a big mistake. But we need more education on what machines are actually capable of. Most media present AI as extremely intelligent, often better than humans. That’s why people stop thinking critically. Since no one checks whether the results on their calculator are correct, why would anyone bother to verify whether AI has made a good decision?

However, those are two very different things.

Was this the first experiment to show how much we trust machine algorithms?

Yes, which is why it was published in a renowned journal. Some time ago another study was performed on a group of machine operators. It found that people who controlled machines tended to lose their ability to think about what they were doing, and that the longer they worked, the less alert they were to machine errors. But that study concerned a specific group. My experiment involved 1,500 people from 13 countries on three continents, and they were not machine specialists.

What can be done to keep our distance from AI? Hanging a warning sign on a machine wouldn’t help much…

What we need is knowledge: knowledge of what machines are capable of, education, and teaching people how to think critically. Participants in my experiments were often very excited just to see a talking, moving robot.

You created a robot that you called “FI”, which stands for “Fake Intelligence”.

Yes, the article includes several pictures showing what it looks like. In its presence the participants would drop their guard. Coming back to your question, I think we should focus on education.

And if it’s not a robot but only software? Do we also get so excited and trust it implicitly?

That’s another research question and another experiment in the project. Even the online part, however, has shown that we do, albeit to a lesser extent. A humanoid robot that moves and talks like a human is exciting, but so is a chatbot or a program. Before the pandemic I ran a pilot study – without a robot, but allowing participants to communicate through an application based on artificial intelligence. The results were the same.

Great Britain wants to fine companies that are unable to account for decisions made by their algorithms. Is that a good idea?

That’s a difficult question, and I’m not a legal expert. Learning machines that build models of reality or of particular situations make decisions in ways that are often difficult to describe, explain or present in the form of an algorithm. There are domains where such artificial intelligence may prove very helpful, and we need to make use of the benefits it has to offer. But we must also study the matter, not just develop AI: we have to analyze how AI develops and how it interacts with people.

Your experiment involved more women than men. Can you tell us whether trust in a machine is linked to age, gender or education?

Experiments involving volunteers always attract more women than men. All statistical tests showed that none of the independent variables – gender, age, education, place of residence – had any effect on the studied phenomenon. Obviously, if we tested AI experts, the result would probably be different. We are also planning control experiments, but they will have to wait until the pandemic is over.


*Michał Klichowski, PhD, DSc, is an Assistant Professor at the Faculty of Educational Studies, Adam Mickiewicz University in Poznan. He has published 10 books and over 100 papers on the relations between technological change and the cognitive and educational functioning of humans. He is the coordinator of the Neuro-MIG international grant, under which research is conducted by over 200 scientists from 31 countries. He is an expert of the European Cooperation in Science and Technology, the Lusófona University of Humanities and Technologies, and the Polish National Agency for Academic Exchange. He has received many science awards, including the Science Award of the Polish Academy of Sciences, the scholarship of the Minister of Science and Higher Education for exceptional young researchers, and the Scholarship of the City of Poznan for outstanding scientific accomplishments. He has collaborated with research institutions in almost every European country, which has given him broad international research experience.


Read the Polish version of this text HERE
