Excitement over the capabilities of digital technology must not drown out fundamental questions about the relationship between humans and intelligent machines. The stakes are high: our autonomy and our humanity. ‘Do we have a chance to create technology that would be truly friendly to us?’ asked experts at World Summit AI in Amsterdam

Ten tracks, over 260 speakers, numerous panels and workshops. Presentations of state-of-the-art technologies developed by renowned producers and success-hungry start-ups alike. Six thousand attendees from 162 countries took part in the two-day third edition of World Summit AI in Amsterdam, an international conference dedicated to artificial intelligence. Although techno-optimism prevailed, some speakers warned against excessive enthusiasm about the future.

The shallowness of deep learning

Gary Marcus, an entrepreneur and cognitive scientist, claimed that the raptures over the possibilities of artificial intelligence are mostly a product of marketing and have little to do with an objective description of reality.

He argued that the "depth" of deep neural networks refers above all to the number of layers, not to any in-depth understanding of the world.

Deep learning is most effective when we have large amounts of data at our disposal. Things look much worse when data is scarce. That is because machine learning involves no true understanding: it is still statistics, not a human-like perception of the world.
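Both points can be made concrete with a minimal sketch, assuming Python with scikit-learn; the dataset and parameters below are invented for illustration. "Depth" here is literally the number of hidden layers, and accuracy hinges on how many labeled examples the statistical fit gets to see.

```python
# A minimal, illustrative sketch (assumed setup: Python + scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic labeled data, invented purely for this example.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Four hidden layers -- a "deep" network in Marcus's literal sense.
deep_net = MLPClassifier(hidden_layer_sizes=(64, 64, 64, 64),
                         max_iter=300, random_state=0)

# Scarce vs. plentiful training data: the statistical fit improves
# with volume, not with any deeper grasp of the world.
for n in (50, 500, len(X_train)):
    deep_net.fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training samples -> test accuracy "
          f"{deep_net.score(X_test, y_test):.2f}")
```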

Are you scared of robots attacking us? If so, all you have to do is to hide behind a toaster or dress up as a bus. They will not rise to the challenge

Gary Marcus

Today’s AI is only moderately intelligent; it relies on correlations rather than true understanding.

Marcus explained that true understanding is rooted in a natural model of reality, which forms the basis for our making sense of the world. People interpret the world in terms of spatiotemporal and causal relations; within that framework, our minds stay flexible and learn to solve problems. Machine learning, by contrast, is mostly about labeling objects.

“Are you scared of robots attacking us?” asked Marcus sarcastically. “If so, all you have to do is to hide behind a toaster or dress up as a bus. They will not rise to the challenge.”

Artificial intelligence is like a corporation

The speech given by Stuart Russell of the University of California, Berkeley was far less humorous. He believes that even if the recent achievements of AI and robotics leave much to be desired, that does not mean they cannot be dangerous. The problem is not whether machines become aware of what they are doing, but whose goals they pursue.

Russell referred to the film Slaughterbots, made two years earlier, which warned against the consequences of deploying lethal autonomous weapons.

He said that miniature drones equipped with facial recognition and capable of detonating explosives pose a threat far more realistic than an attack of terminators.

We need to design AI that does not know the objective of its actions and that requires contact with humans to learn about their preferences – believes Stuart Russell

Fitted with motion detectors, such devices would be impossible for a human to capture and, operating in groups of several dozen or several hundred units, they would be more effective than any task force.

Deadly drones are an extreme example of autonomous tools. But any system that is, in some respects, more intelligent than humans may prove hard to control. And it does not have to be a digital machine.

Any example? “You can think of corporations themselves as if they were AI systems,” argued Russell. Run by optimized processes (in fact, algorithms) whose objective is to maximize profit, they operate seamlessly and efficiently even though their goals often run contrary to those of humanity. And although they too were created by people, the logic they follow is sometimes deadly for us. This is why we should build AI systems in a way that keeps them adaptable to people (human-compatible AI).

How can this be done? Russell replies: by designing AI that does not know the objective of its actions and that requires contact with humans to learn about their preferences. This is no easy task: in many of their actions people are not rational, and there are almost 8 billion people in the world, each with slightly different expectations of life. Regardless, it is AI that should adapt to humans, not the other way around.
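To give a flavor of the idea, here is a purely illustrative Python sketch: an agent that holds a probability distribution over candidate objectives and defers to a human whenever it is too uncertain to act. The goal names, the uncertainty measure, and the belief update are invented for this example; they are not Russell's formal model.

```python
# Toy sketch: an agent uncertain about its own objective (illustrative
# only; goals, threshold, and update rule are invented, not Russell's
# actual formalism).

CANDIDATE_GOALS = {
    "fetch_coffee": 0.5,
    "tidy_desk": 0.3,
    "do_nothing": 0.2,
}

def uncertainty(beliefs):
    """Crude measure: 1 minus the probability of the top hypothesis."""
    return 1.0 - max(beliefs.values())

def act(beliefs, ask_human, threshold=0.4):
    """Pursue the most likely goal, or ask the human first when unsure."""
    if uncertainty(beliefs) > threshold:
        preferred = ask_human(list(beliefs))
        # Shift belief toward the stated preference (toy update rule).
        beliefs = {g: 0.8 if g == preferred else 0.2 / (len(beliefs) - 1)
                   for g in beliefs}
    return max(beliefs, key=beliefs.get), beliefs

# Simulated human who states a preference when queried.
goal, updated = act(CANDIDATE_GOALS, ask_human=lambda options: "tidy_desk")
print(goal, updated)
```

The design choice this caricatures is the one Russell describes: because the agent is never certain of its objective, consulting the human remains valuable to it, so it has an incentive to defer rather than to override us.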

A pigeon dance in front of AI

An entirely different perspective was offered by John Danaher, PhD, of the National University of Ireland Galway. In his opinion, human limitations such as irrationality, prejudice and overconfidence hinder the development of human-controlled artificial intelligence. AI systems are likely to operate more efficiently if we allow them to remain autonomous and content ourselves with illusory control.

Without a thorough understanding of how the machines around us work, we will deceive ourselves that we can somehow explain their mechanisms. In reality, we will resemble the pigeons in behaviorist B. F. Skinner’s experiment: accustomed to specific feeding rules and then confronted with completely random ones, they performed absurd actions (flapping their wings, “dancing”) that, in a way, ritualized their behavior in an incomprehensible situation.

Danaher predicts that people will react similarly when cooperating with opaque AI. The problem today, he claims, is not so much AI hidden in black boxes as ourselves, isolated in our small worlds and surrounded by omnipresent AI. When a machine evaluates creditworthiness, do either bank employees or customers know what actually happens inside the system? Do Facebook employees really know how the recommendation mechanism in their social network works?

Without a thorough understanding of how the machines around us work, we will deceive ourselves that we can somehow explain those mechanisms – says John Danaher

According to Danaher, this situation will breed techno-superstitions. The main contributing factors are a lack of understanding of how the technologies around us work, an illusion of control over them, a loss of autonomy, and an erosion of achievement and of our own sense of agency.

What can be done in this situation? Danaher sees two possibilities.

The first is to extend AI systems with add-ons whose purpose is to explain the operation of the artificial intelligence, and to hope that it will consequently become more understandable to us (which in practice may prove an illusion and the pigeon dance mentioned above). A toy illustration of such an explanatory add-on appears below.

The second is to rely fully on the autonomy of those systems and not to slow their operation with unnecessary mechanisms of human pseudo-control. The Irish philosopher sees no scenario in which we could, in full awareness, use transparent technologies.
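As a hedged sketch of the first option, assuming Python with scikit-learn and an invented dataset and model: permutation importance is one common post-hoc "explanation" that can be bolted onto an opaque model. It reports which inputs the model is sensitive to, which, in Danaher's terms, may feel like understanding without actually providing it.

```python
# Illustrative sketch of a post-hoc explanation add-on (assumed setup:
# Python + scikit-learn; data and model invented for the example).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=8, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)  # opaque model

# Shuffle each feature in turn and measure the drop in performance:
# a rough ranking of which inputs the model relies on.
result = permutation_importance(model, X, y, n_repeats=5, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```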

The speeches discussed above coincided with the launch of new books by all three experts. Inquiring readers may decide for themselves which argument suits them best: Gary Marcus’s skepticism, Stuart Russell’s prudence, or John Danaher’s pessimism. One thing is certain: we should all be wary of a naive belief in progress driven solely by the development of digital technologies.


The World Summit AI conference was held on 9-10 October 2019 in Amsterdam.

Sztucznainteligencja.org.pl was a media partner for World Summit AI 2019.


Read the Polish version of this text HERE
