Many people say we don’t understand consciousness. Let them speak for themselves. I believe we’re getting to know it better and better. We’re able to use our knowledge about the brain to build better artificial intelligence step-by-step. Maciej Chojnowski talks to Professor Włodzisław Duch

Maciej Chojnowski: Gnothi seauton, know thyself – a maxim at the Temple of Apollo in Delphi proclaimed. Self-awareness, self-knowledge is one of the cornerstones on which a significant part of the European philosophy was founded. And how does consciousness look from the perspective of a neurocognitivist?

Prof. Włodzisław Duch*: It depends on who you ask (laughter). Everyone has their favorite theory. I believe John Locke was right: consciousness is the perception of what happens in our mind. And because a lot of things are happening in it, only certain things are clearly distinguishable from the background noise. What’s more, only a small fraction matters for deciding what we will do, how we will behave, where we will go. Meanwhile, we strain our muscles, breathe and digest. Thousands of things are happening inside us, but we don’t realize it.

Yet some brain agitation is so powerful that we notice it. When we use words, for example, we express what we see in our minds. An activation takes place which causes an idea to become part of a developing sequence. And we’re able to utter this sequence, that is, the sentence – although in some cases we don’t know what we’re about to say. Nor can we plan what we’re going to say in an hour. It happens spontaneously, depending on the situation.

We also have an illusion that the mind is something immaterial, because we can’t see the mind, just our actions. The Japanese Zen teacher Shunryu Suzuki wrote that there’s a small mind and a big mind. The big mind is a human being as a whole, it encompasses everything, while the small mind is my belief that only my ego exists. But even it sometimes behaves spontaneously! Some sort of process stands behind all this, into which I have no insight. I can only observe my behavior.

I sometimes play an electronic wind instrument. Since I don’t have a musical imagination, a question arises: how can I compose or play anything? I don’t know. My brain is able to construct melodic lines, it taught itself some things, while I’m not able to imagine a melodic line.

We remain a secret to ourselves?

In his book “Perplexities of Consciousness,” Eric Schwitzgebel gives an example of blind people who claimed to feel a breath of air waft over their face when they were approaching something. It turned out that when masks were put on them, they were still aware of obstacles – but they stopped noticing them when their ears were plugged. The conclusion? It was echolocation; these people were unable to tell that they were using the sense of hearing for orientation in space. They thought it was thanks to the sense of touch. So, if we’re not properly trained, we can’t even interpret our inner transfers of information well.

People came up with different computational architectures which more or less resemble brain activity. Many attempts are successful, even though nobody has done it well in detail

Another case was described by Oliver Sacks in his book “An Anthropologist on Mars.” As a result of nerve inflammation, a certain woman stopped receiving sensory (proprioceptive) signals from her body. Because of that, she didn’t know where her body was and couldn’t move – without those signals, you don’t know how to move your muscles. Yet all it took was looking at her reflection for a moment for this information to find its way into her brain.

How is it possible?

The visual system is constructed in such a way that information on spatial relations finds its way to the parietal lobe. It adjoins the somatosensory cortex, in which the sensory representation appears. Thanks to that, visual information percolates through to the sense of touch. And that’s enough.

Experiments we sometimes do with a rubber hand show it wonderfully. A person whose real hand is hidden from view watches a rubber hand being stroked while the hidden hand is stimulated at the same time – and starts to feel that the rubber hand is their own limb.

Certain processes occurring in the brain are also responsible for activating the areas which discern what the internal state of the brain is and whether a particular idea and meaning can be ascribed to it. Ascribing meaning helps us create sequential mental states, that is, reasoning. From this perspective, the invention of language seems to have been extremely important.

We can feel and sense various things without language; some we can even imagine non-verbally, such as music, and externalize them. I observe my cats: when they spot a bird, they know to run in a particular direction, take a different path to cut it off, and then chase the bird. So imagination occurs without language too – except it’s non-verbal and we won’t build logic on it. A cat won’t start playing around with philosophy.


Many people say they don’t understand consciousness. Let them speak for themselves. I believe we’re getting to know it better and better. What’s more, our understanding extends to such topics as disorders of consciousness in people who, as a result of a stroke or other problems, are unconscious and need to be woken up. In order to do that, we need to cause the re-emergence of complex processes in the brain – incite neurons to cooperate with one another and synchronize their actions. To do this, electrodes for deep brain stimulation are implanted at the level of the thalamus or the brain stem, where the reticular formation is located. It sends its projections to the rest of the brain to fully stimulate it. It’s a bit like a car engine that needs to be warmed up before driving off quickly.

But we still don’t understand everything.

We don’t understand a lot of the details, but we understand the big philosophical questions.

A different matter is how much of this we can transfer onto the ‘’brain-like’’ computer architecture, such as BICA (Brain Inspired Cognitive Architectures). People came up with different architectures which more or less resemble the operation of the brain. Many of these attempts are quite successful, even though nobody has done it well in detail.

Theoretically, the Human Brain Project was supposed to lead to the creation of such a simulator, but there are no signs it has worked yet. The creators of the HBP have done great work because they catalogue all the neuron types, the entire biochemistry. Is it necessary? We don’t know. It can turn out just like with planes – that we don’t have to construct flapping wings to fly. We need to know the basic rules that will let us build something that works similarly. We can then work even faster, more efficiently.

It can turn out that brain simulation is just like with planes – we don’t need to construct flapping wings to fly. We need to know the basic rules which will help us create something which works similarly

Even these kinds of inspirations are really fruitful. This can particularly be seen in the case of neural networks and deep learning. It’s a general, but really useful, inspiration, because thanks to it things like image analysis can be done. Step-by-step, we’re able to use knowledge about the brain to build better artificial intelligence.

Neuromorphic computers are being created as part of the Human Brain Project. These computers try to recreate the way we discover the world thanks to the embodied brain. The matter of energy efficiency is also important in these machines. Is this approach necessary to think about real artificial intelligence?

A neuromorphic approach is really important and an increasing number of energy-saving devices are being created. Artificial intelligence experts aren’t particularly interested in these circuits, but it’s an interesting direction.

If, on the other hand, we want a robot that works relatively well in the real world, then it would be good if it could gain various abilities by interacting with this world: controlling its body, reacting to the environment and so on.

A linkage with the world forms our sensorimotor programs. They aren’t conscious, but we are able to use them to do different things on a higher level. The human brain does a lot of things this way. We learn to walk smoothly, to hear speech as a whole as well as its individual sounds. We learn it all spontaneously – and such unsupervised, spontaneous learning is the basis.

There’s also the matter of what we want to teach a machine. If we want to teach it to play games or control a fighter squadron, then we won’t need embodiment. If we want a robot that operates in the real world, then embodiment will be necessary.

And once we have a robot which can learn this, the transfer of information to a different robot will be instantaneous. If one computer learns something, then the rest will immediately know how to do it too. It’s an incredible thing. We learn from other people, but slowly, with difficulty – and here it will turn out that we have millions of robots, each learning extremely fast, because they all learn from each other. The consequences of what could happen are difficult to imagine.

My attention was drawn to the opinions of two professors – the neuroscientist Gary Marcus and the philosopher Luciano Floridi. Both believe that modern machines aren’t intelligent, because they don’t understand cause and effect. In your opinion, can we talk about artificial intelligence today?

Again, it’s a matter of definition. I define intelligence in a simple way, because we need clear definitions in computer science. If we can create an algorithm which is effective, then we simply program things, as in accounting, for example. If, on the other hand, we want to solve a problem for which such an effective algorithm doesn’t exist, then the solution will require a degree of intelligence. And artificial intelligence tries to solve problems for which there are no effective algorithms. It’s a simple definition.

Let’s take a look at the travelling salesman and bin-packing problems. For example, we want to come up with a train schedule that optimizes carriage use, speeds up connections, and so on. These are difficult problems, and considering all the options on a larger scale to find the optimal one is impossible. The complexity of the problem grows so fast that no computer can handle it. Maybe a quantum computer could in the future, but not in every case either.

That’s why there are no effective algorithms for such hard problems. We need to look for shortcuts. Is this intelligence? When intelligent people plan, they also use various tricks and shortcuts, right?
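The combinatorial explosion and the “shortcuts” mentioned above can be made concrete with a minimal sketch (the four-city distance matrix and the helper function are hypothetical, purely for illustration): the number of possible round trips grows factorially, while a simple heuristic such as nearest-neighbour finds a reasonable – though not necessarily optimal – route in polynomial time.

```python
import math

# Number of distinct round trips through n cities grows factorially:
# (n-1)!/2 routes, which quickly becomes astronomically large.
for n in (5, 10, 20):
    print(n, "cities:", math.factorial(n - 1) // 2, "routes")

# A typical "shortcut": the nearest-neighbour heuristic builds a route
# in O(n^2) time by always visiting the closest unvisited city.
# It is not guaranteed to be optimal, but it is often good enough.
def nearest_neighbour(dist, start=0):
    route = [start]
    unvisited = set(range(len(dist))) - {start}
    while unvisited:
        last = route[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        route.append(nxt)
        unvisited.remove(nxt)
    return route

# Hypothetical symmetric distance matrix for four cities.
dist = [[0, 1, 4, 3],
        [1, 0, 2, 5],
        [4, 2, 0, 1],
        [3, 5, 1, 0]]
print(nearest_neighbour(dist))
```

Already at 20 cities, exhaustively checking all routes is hopeless, which is exactly why practical planning relies on heuristics like this one.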

But when Gary Marcus says that what we do today isn’t intelligence yet, there’s some truth to it. For example, the Watson model created by IBM doesn’t have a cause-and-effect model, which makes its understanding of medicine rather poor. There are specialized programs that have such capabilities, but they pertain to narrowly defined problems.

Kai-Fu Lee wants to convince us, in turn, that an era of artificial intelligence for “do-it-yourselfers” is upon us. There’s no need for genius discoveries, because those happen once every fifty years. Sure, something can always alter the balance of power, but for now we more or less know what to do, and money is invested in implementation. In all the AI strategies drawn up in Europe, everyone puts their money on implementation. Many of these strategies don’t mention science at all.

Some say we are witnessing an arms race in this area. Stuart Russell warns against the development of lethal autonomous weapons – drones, for example, that could set off micro charges. Others want to scare us with super-intelligence. But you can also think about it differently. It’s enough to have an ecosystem of supposedly innocent smart tools using the Internet of Things that, without proper safeguards, will become tiresome. It’s sometimes jokingly depicted as the rebellion of toasters or refrigerators. Are you concerned it can happen?

There are many things we should be wary of. One of them is the enormous concentration of capital. As Lee writes, there are currently seven huge corporations that decide many things in the world; they’re able to break down economic systems, for example. This concentration gets more pronounced, because a few people control massive areas. WhatsApp was sold for 19 billion dollars. A simple messenger sold by a company that at one point had fifty employees! During the Moon landing era, space agencies employed hundreds of thousands of people; today fifty make billion-dollar deals. Unbelievable!

Of course, the most concerning things revolve around military applications. If we have a system which sees where the planes and military vehicles are, what’s happening on a battlefield, then the machine is able to guide such processes better than a general. I can imagine that, if there’s a robo-cybernetic war, certain countries will lose their moral compass.

We also have artificial intelligence standing behind political processes. Bots have influenced election outcomes. And now there are deepfakes as well, so the possibility of influencing groups of people is even greater. And it works – many people get excited about it, although few check who’s behind it all. I’m afraid ignorance and arrogance will win, and artificial intelligence will have a part in it.

On the other hand, there are great possibilities. Take medicine as an example, or the creation of environments which will support human development. It’s possible to determine, for example, whether dyslexia will develop in a child, and whether the child needs special stimulation in the early stages. This doesn’t surpass our abilities.

What are you working on with your teams in Toruń?

I founded the Neurocognitive Laboratory at the Centre for Modern Interdisciplinary Technologies. Our main project focuses on monitoring the development of infants, particularly the development of their phonemic hearing. We wanted to challenge the dominant hypothesis that a child can learn to differentiate between the sounds of a given language only if it has contact with a person who speaks that language – in other words, that social contact is needed for it.

I’m able to imagine that, if there’s a robo-cybernetic war, certain countries will lose their moral compass

I came to the conclusion that engaging the reward system is necessary, i.e. promptly received feedback. We wanted to create a system that monitors children’s hearing and notices what a child reacts to, because before its eighth month a child reacts to all speech sounds. It starts to lose this ability after the tenth month, and by the twelfth month it basically differentiates only between the sounds of its native language.

We implemented it as part of the “Symfonia” project. These were multi-area, interdisciplinary studies. We got linguists from Poznań involved, who came up with the segmentation of different speech parts for us. We also started collaborating with specialists from a research center in Kajetany, who manufacture – among other things – hearing aids. It’s been an extraordinarily ambitious, five-year project. It’s currently ending.

We also have interesting side effects. A toy, for example, that stimulates a child in such a way that it’s possible to determine whether the child’s hearing is developing well, whether it hasn’t gone deaf, and whether we can strengthen the differentiation of sounds in a language other than the mother tongue. This would cause quite a revolution.

Apart from that, it’s also about stimulating curiosity in children, particularly the slightly older ones. It was supposed to strengthen their working memory, which correlates well with intelligence. It’s not about forcing anything; it’s about the child learning that it pays to remember various things, because it can earn a prize that way. That too would be a revolution.

What was the outcome?

Psychologists started doing experiments and replicating the previous research. The experiments took longer than expected – performing an EEG on such small children turned out to be a nightmare. It was a bit easier with oculometry, but it certainly wasn’t easy. We finally came up with a gaze-guided toy: a child anticipates that something interesting will pop up and, depending on the sound it expects, looks in one direction or another. We’re now diligently writing up the conclusions of these studies.

Our second big project centers on brain fingerprinting, i.e. studying the brain through signal analysis. We did this project with Professor Andrzej Cichocki from the Brain Science Institute in Tokyo. On the basis of analysis, mainly of EEG, we want to study the activity of chosen brain structures. If we can find information in the signal about the activity of these structures, we can do so-called neurofeedback, i.e. strengthen their activity.

We can also stimulate the brain directly, by applying direct current or transcranial magnetic stimulation (TMS) – a magnetic field which increases neuroplasticity. This helps improve information transfer where it has broken down, for example in people who have had a stroke or in autistic children. We are also thinking about applying brain stimulation to pain alleviation.

*Professor Włodzisław Duch – a graduate of physics at the Nicolaus Copernicus University (UMK) in Toruń, he did a Ph.D. in quantum chemistry. His research interests initially pertained to computational methods of quantum mechanics, computational physics, the foundations of physics and the interpretation of quantum mechanics, only to shift at the end of the 1980s to the theory and applications of neural networks, machine learning, artificial intelligence methods, brain modeling and neurocognitive technologies. He became a professor in 1997.

He worked multiple times as a visiting professor in France, Japan, Canada, Germany, Singapore and the USA. From 1991 to 2014, he was the Director of the Faculty of Physics, Astronomy and Informatics at the Nicolaus Copernicus University in Toruń. Since 2013, he has been the Director of the Neurocognitive Laboratory at the Centre for Modern Interdisciplinary Technologies at the Nicolaus Copernicus University. In 2012, he became the Vice-Rector for Scientific Research and Informatization at the Nicolaus Copernicus University. In 2014-2015, he was the Undersecretary of State at the Ministry of Science and Higher Education. A founding member of the Polish Neural Network Society (1995), followed by the Polish AI Society (2010). Since the end of the 1990s, he has been involved in establishing cognitive science in Poland; he is a co-founder of the first thematic journal and of the Polish Cognitive Science Society (2001). He was twice appointed President of the European Neural Network Society (2006-2011). In 2013, he was chosen as a fellow of the International Neural Network Society (INNS) and a member of the Commission on Complex Systems at the Polish Academy of Arts and Sciences.

The interview with Professor Włodzisław Duch took place as part of the Brainstorm 4.0 Conference, organized by the Neuropsychology and Psychophysiology Student Club at the University of Warsaw.

Read the Polish version of this text HERE
