In the future we will all be part robots. My prediction is that in a thousand years we will [physically] disappear from this world – says Richard van Hooijdonk in an interview with Michał Rolecki and Anna Zagórna

Michał Rolecki, Anna Zagórna: Will we see the creation of Artificial General Intelligence (AGI)? Will it never come into being, or rather, should it never come into being?

Richard van Hooijdonk*: I think every form of AI should adapt to our needs. And yes, I think it will come, if it makes us happier in general, or makes us healthier. For example, I wear an Apple Watch, and since I have ADHD my heart rhythm is going pretty fast, so I think it's the best way to manage my health. So I really think it's more efficient to have AI running my health.

On the other hand, we should protect ourselves. AI can do a lot of things people can't. Our brains haven't been updated for the last 3,000 years. So we need AI to move forward in general: to live longer, stay healthier, drive more safely, produce more cheaply.

But if you automate this, there's a risk of those algorithms doing things we just don't want in our society, like making certain decisions we don't appreciate. We saw this at Facebook: they had an algorithm, and in the end they no longer understood the language of the algorithm, so they decided to kill it.


So, as long as we cannot understand what AI is doing, we should step in with a set of rules, and that's the reason I talked about ethics. It's also the reason I talked about auditing. If you have a set of algorithms for healthcare, for education, for police work, for agriculture, we should have certified institutions protecting us against the ethical negatives of these algorithms.

From that perspective, I think general AI will come and will become bigger and faster. But my only condition is that we need to audit these algorithms, and we also need to build a foundation of knowledge so that people are able to talk about it.

Let's move on to the second question. Is there a danger that applications of artificial intelligence, robots, and the automation of work and industry will divide society into the poor and a handful of ultra-rich people, like slaves and billionaires? Aren't we slowly heading in this direction?

I don't fully agree. If you talk about the 'haves' and 'have-nots', I think we need to look ahead. If you picture the developers of AI as part of the billionaire group, and then users who don't have much money and can't afford to use AI, compare it with Google and Facebook: they want to connect the whole of Africa to the internet. That means every African person will be able to follow an MBA training for 25 dollars and have a 5-euro smartphone, these very cheap phones.

So I really think that being able to access AI will give everyone in the world an advantage. The only thing we need to agree on is how to democratize the outcome of algorithms, and that is actually what you're mentioning: democratizing the outcome of algorithms. This is where the ethics, the agreements and the audits come in. Governments, whether European, Asian or global, should step up with a set of rules that those algorithms have to comply with.

But in order to do that, we need a form of education, we need knowledge and awareness. A lot of companies, like Amazon and Microsoft, spend too little time on real global awareness. Algorithm makers and data companies make money because of algorithms, but governments don't make citizens aware.

So what should we do to be safe?

So, if you want to build a safe society, you make people aware: you need awareness and education, and then you have rule-setting and audits. That chain is actually not built, and that's dangerous. I really agree that right now the billionaires use algorithms to steer what you buy. It's clear that everything that happens on Facebook and Instagram is so monetized that people are steered to buy specific products, and they're unaware of it. And because people are unaware, there is an unequal situation, which is dangerous.


So I agree, but then we should step up, and governments should step up and build knowledge and build regulations, a set of ethical rules. And I do see technology companies doing that. Companies like IBM, and also SAS, are building sets of rules and bringing those rules to their customers, but it's going too slowly.

Last year's AI at Work report from Oracle and Future Workplace concluded that 64 percent of people would trust a robot more than their human manager. How far should we go in handing over power to AI?

Everything has to do with safety and security. I don't trust any algorithm at this very moment; I just don't trust them. Because there are unlimited sources, an unlimited number of data schemes, and unlimited ways of manipulating the data. So I'm actually looking for an algorithm that secures my personal data. And I hope it's not brought by Amazon or Google, actually.

But I think the basic foundation here is that people believe algorithms more than their manager, because their managers make different decisions every day. It's about emotions, and people don't like emotions when it's business. You need to be rational, and artificial intelligence is more rational than people, so they want to cut out the emotions, which is understandable.

But in the end, it's about trust and security. I don't want my car to kill me. You could easily take over 20 cars and let them attack one target, which, from a cybercrime perspective, is really possible. So we need a reliable, trusted network of ecosystems that manage and audit these algorithms. I don't know what that looks like; if a machine audits an algorithm, it's a machine auditing a machine. So we should step into this whole concept with something we understand. I think that's not in place, and it's really dangerous. But I'm afraid we'll first need a lot of accidents.

In your opinion, does the widespread use of facial recognition technology jeopardize our right to freedom and privacy?

Since 9/11, society has become a lot more flexible about data and cameras, right? Because there's a higher cause: being able to find terrorists and thieves and get them out of society in time.

The problem now is that the goal, pushing terrorists out of society or finding people who don't pay their taxes, is a clear goal you could decide on. But you can use those systems to catch terrorists and also to see whether you are working or not, to see when you travel for work, to see when you went to your girlfriend's, and whether you are married. So, using the data for something other than the original goal it was designed for: that's the problem.

And in Holland we also have an algorithm that tagged our citizens in the wrong way. The algorithm was supposed to detect possible fraud, and that went totally wrong. For example, we have a lot of American people in our country, and at some point a lot of fraud cases involved American people, so the algorithm assumed that people who looked like those American people were also committing fraud and put them on a blacklist. And there wasn't any human being watching it. So we need human intervention, and we need to take care of the whole discrimination part, which brings me again to auditing those algorithms and talking about them. But again, we need to learn; it's all so new.

You have several RFID chips in your body. Why did you decide to get them? Is it a technology that we will all use in our lives in the future?

By the end of next year there will be another one. I think we already have pacemakers; we already have technology inside. And for the last four years I've been getting all kinds of calls from people who say, "you are the one who puts chips into bodies!". All kinds of people around the globe have got paranoid; people think all kinds of creatures put chips into people's bodies. So, the reason why I'm implanting technology is that I want to experience technology myself.

What do you use the chips for?

I don't need a passport anymore, I don't need a bank card anymore; this is my bank card now. My body has access to all kinds of rooms. I think that's really efficient; I don't see the harm.

It's a common thing that we want to stay safe and healthy. For example, my son suffers from epilepsy; how wonderful would it be to solve that issue for him and for other people.


But on the other hand, we need to secure our systems, so is it secure? There were a few people hacking my chips a while ago; they were running behind me because they had a reader. But in time, we will become merged with robots. When we all have a brain chip inside, when your heart is failing, you will have a really smart pacemaker concept. When your back hurts, you will have artificial muscles. And this, again, has to do with ethics: how far are we going to go? My prediction is that in 700 or 1,000 years we will leave this world as we came, without technology.

The only thing that is interesting in your brain is your knowledge and your memories; that's all, for the rest you're not interesting at all. So we can push it out and let it live for another 500 or 1,000 years. The question is: do we want that? Is that ethical? It's interesting.

In your e-book "The Future of Education" you write that the use of technology in education will help prepare students better for future work. But you also say that education has always been slow to adopt new methodologies. So, what do you think school will look like in, say, 10 years, in 2030?

Schools are now separated from business; they are completely outside it. There are a few universities, like MIT, that work together with companies and organisations. But primary and secondary education are usually isolated from the real world, which means that educators and professors are really isolated too. I'm a professor myself, and sometimes I see these really old professors who don't have a clue about what is happening in the real world.

So, connecting the dots is crucial. They don't have enough money, they don't have enough time, they don't have enough knowledge, and the future is changing pretty radically, faster than 10 years ago. So they will fall further behind every year. That's the reason why I opted for a concept like a campus, where young children, 10 to 15 years old, are part of ecosystems, like in the US, where they teach children how to use 3D printing, where young children learn what computing is, what robotics is.

For my son, we have a few robots at home, and he learns that a robot does things step by step: step one, step two. Those skills should be taught at a really early age, and that's not happening right now; that's crucial. I see that in China and in India things are changing from the education perspective, but in Holland, in Europe, in the US, it's not changing. Education is separated from science, and I think we should merge them from a really early stage. That will give us a big advantage.

The European Union continues to work on future regulations pertaining to artificial intelligence. Artificial intelligence in Europe is supposed to be trustworthy and ethical, and it's going to serve the higher goal of human welfare rather than strengthening gross domestic product. With all this, does Europe still have a chance to join the United States and China as a third player in artificial intelligence?

Forget about it. I don't think so. I visit China and India a lot; there is such an eagerness for change there. Even in the US, people are actually a bit spoiled. For example, from a quantum computing perspective we've lost the race anyway.

I think we could manage technology the same way the Chinese and Indians do, but their mindset and the way their societies want to change are completely different. And this is the reason why I think we've already lost the race, and if you lose a race, you should just join it. So we should join Alibaba, we should join those Indian companies that are driving change, and stay part of it. Maybe it sounds stupid, but this is how I think about it. What is your opinion as a journalist?

I can't decide. My personal opinion is that we have already lost the race, but perhaps we could offer a third way. We could sell ethics and trustworthiness as a product, perhaps?

I think this is the time for global collaboration; we need to collaborate. There are a lot of retailers that fear Alibaba, so let's join them. Alibaba needs local presence, and we need Alibaba-like concepts, say with Amazon. So we need to collaborate on a global scale, but that's different. We have national governments, though, so that's an issue.


*Richard van Hooijdonk is a futurologist and trend forecaster. With his team of 15 people, he analyses trends such as robotics, drones, autonomous transport, the internet of things, and virtual reality.

He was a guest of this year's Beyond Tomorrow conference, held on 23–25 November. Our website and its publisher, the National Information Processing Institute – National Research Institute (Ośrodek Przetwarzania Informacji – Państwowy Instytut Badawczy), were partners of the conference.


Read the Polish version of this text HERE
