Beings that are completely different from us are coming and I have reconciled myself to the idea that humanity might disappear, says Grzegorz Lindenberg, a sociologist and publicist, in a conversation with Robert Siewiorek

Robert Siewiorek: Let’s say that your son is about to go to college. What field of study would you recommend to him?

Grzegorz Lindenberg*: Whichever he chooses.

Wouldn’t it matter?

It wouldn’t. I would be happy if he could study whatever he is interested in for another 3 to 5 years. Will he be given an opportunity to use his knowledge in the future? I have no idea. If he were an ingenious mathematician, physicist or biologist, I would tell him: go for maths, physics or biology. But if he were an average student with general skills, I would tell him to study whatever he wants.

And you say it wouldn’t make any difference?

It wouldn’t, because no one can guarantee that he will be able to earn a living doing the job he was trained for. Unless he were an eminent scientist in a field connected with artificial intelligence or biology. However, that scenario pertains to less than 0.1 percent of the global population.

This year my son is completing his master’s degree in law. How much time do you think he will work as a lawyer?

Everything depends on what he is going to focus on. If he works in a law firm and prepares statements of claim or analyses of past and current cases, he won’t be around for long, because there already are programs that can do it for him. On the other hand, if he becomes a judge or a barrister, he might do that job for about 20 years, although he will probably be assisted by artificial intelligence, which will give him hints and prompts on various issues.

It is believed that artificial intelligence will take our jobs. Does our world, where billions of people are jobless simply because machines can do their tasks better and cheaper, have a chance to survive? Basic income will not solve the problem of having too much free time if people don’t know how to spend it.

I guess there are many ways people can learn what to do with their free time. Some of them have already learned it. The bigger problem is young people, mostly men, who spend their time not working but playing computer games.

And are the cause of a breakdown in social relations…

Yes. In Japan and in the United States you can already see large groups of young men who do not work, live with their parents and subsist on whatever they get either from them or from social welfare. We have to spell it out: the development of artificial intelligence does not only mean new opportunities for creativity or cognition. It also means the development of entertainment, games and virtual reality.

To create substitute worlds?

Worlds that will be increasingly convincing and increasingly similar to the real world and, later, more interesting and more absorbing than the real world. If there are millions of such men today, there will be tens or maybe hundreds of millions of them in the future. As the situation evolves, we will have more time for ourselves, but we will also be manipulated and dragged into entertainment more often than in the past. Living a reasonable life in a world where you don’t have to or can’t work will therefore become one of the most important skills to acquire.

Will there be one or more instances of strong artificial intelligence?

The first strong AI (artificial general intelligence) to be created will probably be the only one, but nobody knows for sure. Everything will depend on how fast it will be developing and to what world it will lead us.

That sort of artificial intelligence might be compared to an extraterrestrial, except that it will be created by humans. It will be different from us, so we won’t be able to understand it. It will not be a human; it will be stripped of all biological features that are so important to us.

The most extraordinary thing is that all this is just around the corner and that we will experience a new reality within the next several years or decades

Will there be several instances of strong AI? I don’t know. I would say there will be only one. What will it do then? Well, that’s an area yet to be explored. Let us get back to the comparison of AI to extraterrestrials. Whenever we think about aliens, we usually assume they are more intelligent than we are, meaning their IQ is about 300, with ours not exceeding 180. The IQ of artificial general intelligence may be 1,000 today, 10,000 tomorrow and 25,000 or even more the day after. Strong AI, if we create it, will self-improve not linearly but exponentially. Having that ability, it will become something that can freely manipulate reality.

There has been much discussion about how to model AI so that it takes account of our interests and does not harm us when it becomes smarter than we are. Isn’t that wishful thinking? It is hard to imagine a being smart enough to make its chain of reasoning incomprehensible to us but that, for some reason, still defers to us. Moreover, there are situations in which a conflict between our interests and AI’s is inevitable, e.g. over access to resources.

That’s true, but the fight over resources is understandable from our point of view. Strong artificial intelligence, however, may have a completely different perspective that we won’t be able to spot. It doesn’t have to be a fight over resources. It can be something completely different, like aesthetic reasons. Maybe humankind will not be physically pleasing to AI, which may be reason enough to wipe us out.

Or it will find that we are inefficient.

That may be the case. Yet, I believe that it is worth discovering ways to make artificial intelligence friendlier to humans. Without it, we would be similar to people who don’t want to get insured despite being aware of the approaching hurricane. We must give ourselves a chance.

Will artificial general intelligence get rid of us one day? We will never be sure that at some point it won’t decide to do so. Our past experiences give us every reason to fear it may happen. Since the beginning of humankind, people have been a species able to overcome limitations imposed by nature. The way we live today results from the fact that we were dissatisfied with how biology predisposed us. We decided to leave the savannas, to stop hunting and gathering, and we turned to agriculture. Today, we don’t want to wait for a disease to run its course – we want to cure it.

And we will teach that modus operandi to a being that is not biologically constrained.

And once we tell it, “You are free to improve,” it will start changing. Will it be possible to permanently equip AI with mechanisms compelling it to treat us as friends? Will they work in the case of artificial intelligence with an IQ of 30,000? We don’t know. In my opinion, we will come back to theological discussions and ask ourselves what a god we have created can do and whether it is subject to any limitations.

In your book “Bettered Humanity” you refer to the view of Judea Pearl, who claims that machines must first be taught cause-and-effect reasoning before they can become truly intelligent. How can you be sure that this is the way they will think? Maybe they will discover a new way of reasoning that we won’t be able to understand?

It is clear that they can discover something completely different from what we would think of. If artificial intelligence is finally capable of thinking in cause-and-effect categories, then its way of thinking and ours will be very similar. However, that will not be the only way of reasoning it will have.

But then we will lose contact with it.

But we can’t understand artificial intelligence even today. We don’t have contact with it even though, generally speaking, it is still less intelligent than we are. As a matter of fact, we don’t understand the processes artificial intelligence chooses to follow.

Two years ago Facebook had to shut down one of its systems because its chatbots started to communicate in their own language, incomprehensible to humans, in defiance of their programmed protocols.

We already knew that artificial intelligence would develop such solutions to save time and energy. We know that AI does that to improve efficiency. How AI does that, and how it reaches various conclusions which prove correct in practice, is another matter. We can’t explain it. This is why a separate research area focused on explainable artificial intelligence has already emerged.

It has been created to help us understand what AI does.

Yes, because we want cognitive control over what it does. Actually, we want any control we can get. In the Polish banking system, if a loan is not extended to a customer, they have the right to ask for the reasons behind the decision. Artificial intelligence may fail to provide such clarifications, as it simply won’t be able to.

To some extent, we should understand why AI does something and how it makes decisions. We should, because its effects will be more and more far-reaching and widespread. One of the characteristics of any type of intelligence, be it human or artificial, is making mistakes. The more complicated a situation is, the bigger the probability of a mistake.

We create a world beyond our imagination, because we cannot imagine beings that are more intelligent than us and that do not share our emotions

Trying to understand how artificial intelligence works, we want to avoid making mistakes. We want to be sure it works and thinks correctly. It’s a bit like having a genius that does different things intuitively and that is always perfect. We are happy, but at the same time we feel uneasy. Although the genius has done those things flawlessly 50 times, there is always a risk that on its 51st attempt it will lead us up the garden path.

Do you think that at some point artificial intelligence might become conscious? How will we know if that happens?

I believe it will happen because the probability is very high. Of course, in that respect, the Turing test is not reliable, one of the reasons being a certain contradiction: on the one hand, we would like artificial intelligence not to be able to deceive us; but on the other hand, if it is not able to do so, the Turing test will immediately make us realize that we are dealing with a machine pretending to be human and not with a human. In other words, we will discover that it is not artificial general intelligence.

I think that the moment when we will be dealing with conscious artificial general intelligence will be the moment when talking to it will be no different in any respect from a normal conversation with a human being.

Will strong AI have any emotions? Will they be similar to ours?

If so, its emotions will probably be different from ours, although we will surely try to model them. The first emotion acquired by AI may be curiosity. If artificial intelligence is stripped of curiosity, it will lose its ability to discover the world unless we constantly reward it for its willingness to do so. However, rewarding will mean, in a way, instilling AI with curiosity…
AI may have its emotions but they will be different from ours. Our emotions are connected with our body. I think that AI may have something that might be compared to fear or joy…

Fear connected with the self-preservation instinct?

Yes. The question is if we will be willing to help AI to acquire such emotions. But even if we are, it will not become a human. We may never create that being, but we are still working on it. We have been creating a being different from us and we have been trying to understand it as humans, i.e. by referring to our emotions.

Since it won’t be a human, what will the being be like?

Let’s imagine that all the things we depend on disappear in an instant. Let’s imagine we don’t have to eat and drink. Let’s imagine we are completely uninterested in sex. Losing interest in sex alone means we are no longer attracted to the beauty of the opposite sex and abandon a great many relationships with people. It’s not about becoming psychopaths. It’s about getting rid of many of the needs we have. And that is only half a step towards artificial general intelligence.

Researchers from the University of Hanover are working on an artificial nervous system that would allow robots to feel pain. Fear of pain causes living beings to take care of their bodies. For that reason, it is now believed that robots would break down less often and take better care of their physical form if they knew what pain was. How are we to treat thinking machines in the future if we teach them to suffer? In such a situation, treating them as objects would be inhumane.

These are questions that cannot be answered yet. Artificial intelligence – whether taught to suffer or not – will require a completely new approach if it becomes conscious the way we are. Let me give you another example. Suppose we have conscious artificial intelligence that exists in a computer. If we decided to destroy that computer or to delete the AI, it would be like killing a human being, wouldn’t it? To me, the question of whether AI will be able to suffer is a secondary concern. If it is able to think and to understand itself and the reality around it, it will be a being, an entity. To be honest, all this looks a bit scary to me…

Aren’t you afraid of the new world to come?

I am, to some extent. The new world will be entirely different from the one we know today. We will be more shocked than a Pygmy teleported from the African jungle to the heart of Manhattan. And that makes me scared. Beings that are completely different from us are coming and I have reconciled myself to the idea that humanity might disappear. I just can’t believe that our brains will get connected to computers and that we will become cyborgs.

Are you referring to the Kurzweil idea?

Yes. Firstly, it is so complicated that it will not happen in the near future. Secondly, I’m not quite sure I would like my consciousness to be locked in a computer. Despite all the problems I may have with my body, I have to say I have got used to it.

On the other hand, I agree with what Max Tegmark wrote in his book “Life 3.0”: if we want to explore space, we must forget about rockets and start thinking about digital forms of existence. Only then will we be able to explore the entire universe.

So I think that perhaps we, as a species, are indeed some kind of intermediate stage, preceding the appearance of beings that are more intelligent, better versions of ourselves. It may be that they will replace us.

Living a reasonable life in a world where you don’t have to or can’t work will become one of the most important skills to acquire

The most extraordinary thing to me is that all this is just around the corner and that we will experience a new reality within the next several years or decades. Most of the people reading this interview will soon find out whether what we are talking about has made sense or not.

Let us hope that our readers will not conclude that we entirely lacked imagination and that the world they live in can by no means be compared to our cautious and reserved forecasts.

It makes me think of a standard way of presenting differences between a two- and three-dimensional world: a frilled shark living in a two-dimensional world cannot imagine a world inhabited by birds.

As a matter of fact, I got the impression that we are creating a new dimension of intelligence, a new dimension of thinking. We are creating a world beyond our imagination.

Why?

Because we are unable to imagine beings that would be more intelligent than us and that wouldn’t have emotions similar to ours. That is why we anthropomorphize animals and other beings like artificial intelligence. But it is all unnatural. As you said, our descendants will probably laugh at our naive vision of the future.

Just as we are laughing at the visions futurologists had 30 years ago.

Let us go back to that world: it wasn’t changing on a daily basis, and yet today we find it completely strange, don’t we? For young people, a world without smartphones, the internet and computers would be a nightmare. Well, how would you communicate in such a world? Where would you look for information? In the TV news broadcast once a day? I still remember newspapers that contained news not from yesterday but from two days before, because it was impossible to distribute them as fast as one would like. Those were the times when it was OK not to know and not to be in touch. The world that existed 30 years ago looks extremely odd. But the world that will come in 30 years will differ from the one we live in now even more.

Will the world slip away?

Maybe at some point. If artificial intelligence is able to change itself and the world faster than we can understand it, then the world will slip away. And if that is inevitable, I would love to see it.


*Grzegorz Lindenberg – PhD in social sciences, editor, publicist, member of the management board of the European Issues Institute foundation. From 1985 to 1987 he was a researcher at Harvard University and a lecturer at Boston University. He has authored several books, including „Ludzkość poprawiona. Jak najbliższe lata zmienią świat, w którym żyjemy” (2018) and „Wzbierająca fala. Europa wobec eksplozji demograficznej w Afryce” (2019).

Grzegorz Lindenberg was a guest at the 9th edition of the European Forum for New Ideas held in Sopot between 25 and 27 September.

Our portal sztucznainteligencja.org.pl was one of the media partners of the event.
