China almost scares me. They don’t have any inhibitions about gathering data. They don’t even use IDs; you get your face scanned at a police station. It’s not gonna happen in Europe, says Dr. Anne Elster in conversation with Maciej Chojnowski

Maciej Chojnowski: When we use smart tools like Google Assistant or a search engine, we mostly don’t realize what technical infrastructure is necessary for them to work. We have a device or an app, and they seem so easy. But would AI technology in these smart tools be possible without High Performance Computing (HPC)?

Dr. Anne Elster*: That’s a very general question. It depends on how you define AI and how you define HPC, but my general answer would be: no, of course not. Would the language processing you’re doing for speech recognition be possible without HPC? Probably not. Today’s smart tools that use AI rely on really big data. For speech recognition you need vast databases of voices to train the systems.

And to process that data you need supercomputers, right?

Yes, or at least high performance computing. I think it’s underestimated how much data AI needs to perform well. I’m talking now about the machine learning aspects; AI is so broad. If you narrow it down to machine learning, to using neural networks (which is another subset) and doing supervised learning, then you really need big data. And that is the most popular and widespread AI application.

What you need to remember with AI is categorization. That’s when you really need the big data. Once you’ve built the categories, it is more like a look-up, but even that is a challenging task. Obviously you’re not gonna put a supercomputer in an autonomous car. But if the autonomous car needs to drive, there’s a database behind it, right?


You need supercomputers for the pre-processing that builds the model. In order to recognize what it sees, the car uses inferencing based on a model built from vast databases.

To recognize the traffic situations?

Yes. They’re pre-defined, because it has seen most of them already. Of course there’s always the danger of how an autonomous system will react. Erratic behavior, unusual circumstances, that is of course always the challenge, and the hard part for AI is handling the unexpected.

But I think the biggest challenge there is not technical, but ethical. The ethical and judicial dilemmas, because what is considered correct behavior is actually culture-based. You can imagine a situation where there’s a car with a mother and a child in it, and there’s an old lady, some crazy kids and drug addicts crossing the street. If there’s a choice, which group should the car ram into? Should it hit the people on the street? You have to make some kind of decision. If it’s done by AI, what are the legal ramifications of that? These are ethical dilemmas that I don’t think are easily solvable.

The biggest difference between Europe and the U.S. is that in the U.S., once your data’s out there, you have no right to it. In Europe you can at least say: this is my data, so I want you to delete it, to get rid of it from your database

Which one you pick may even be culturally based. So here in Poland you may have different rules than in the United States or China. Especially China, because it has a very different culture. I’ve had some interactions with people from Huawei. They don’t seem to understand why people in Norway are so uptight about privacy. In their minds, we should just not care, because if you have something to hide you must be a criminal. That’s their mindset. But in Norway we have a different view, having gone through the Nazi occupation.

Another issue is how we view age. In some cultures one would prefer to spare the young over the old, since they are viewed as our future, while other cultures value older people more than young people, since they are seen to provide invaluable wisdom that the young do not have. Also, society has already invested in their training.

In the West there’s a different approach to privacy than in the East.

Of course, but our approach is also changing. When I came to the United States in the 1980s, I was shocked that my phone bill itemized every phone call I made, how long it lasted and what number I called! I thought: what an intrusion into my privacy! In Norway, the telephone company was not allowed to do that. And yet today everyone in Norway accepts it, so our tolerance level has changed. And it’s changing all the time. I’m sure it’s happening in Poland, too.

I think the biggest difference between Europe and the U.S. is that in the U.S., once your data’s out there, you have no right to it. In Europe you can at least say: this is my data, so I want you to delete it, to get rid of it from your database. However, GDPR is not the solution to all the challenges we are facing.

Speaking of self-driving cars, do you think we can cooperate with autonomous machines? Or should we rather have areas where only autonomous cars are allowed?

It’s pretty clear that before too long, if not already, the number of accidents will fall when humans are not involved in driving. So that’s another ethical dilemma.

Traffic deaths worldwide are over a million per year! A million people! That is more than in many wars or big catastrophes. And people don’t realize that the most dangerous thing you can do is drive.

It sounds dramatic.

It’s very dramatic. So do we have a moral obligation to fix that? And if autonomous cars are the answer, should we have special lanes? Maybe. The question is who should be protected from whom: people from autonomous cars, or the other way round?

In your talk at Supercomputing Frontiers Europe 2019, you mentioned Dr. Genevera Allen and the black-box problem. Is the reproducibility problem mainly a result of machine learning, or is it a general problem of which machine learning is only one aspect?

There’s a problem with reproducibility in any paper in computer science! And it only gets worse, because the software stack is so complex. People don’t specify which codes they’re using; they don’t even say which version of the operating system or which version of a library they’re using. Everything’s changing so quickly, so actually reproducing a benchmark is almost impossible. People are so hyped up about benchmarks, but what’s the likelihood anybody can get the exact benchmark result you got?
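
To make this point concrete, here is a minimal sketch (an editorial illustration, not code from the interview) of how an experiment script might record the software stack alongside its results, so that a benchmark can at least in principle be re-run under the same conditions. The package list ("numpy") is a placeholder for whatever libraries a paper actually uses.

    # Minimal sketch: snapshot the software stack for a results log.
    # Editorial illustration; the package list ("numpy") is a placeholder.
    import json
    import platform
    import sys
    from importlib import metadata  # Python 3.8+

    def environment_snapshot(package_names):
        """Collect OS, interpreter, and package versions."""
        packages = {}
        for name in package_names:
            try:
                packages[name] = metadata.version(name)  # exact installed version
            except metadata.PackageNotFoundError:
                packages[name] = "not installed"
        return {
            "os": platform.platform(),  # e.g. "Linux-5.15.0-...-x86_64"
            "python": sys.version,
            "packages": packages,
        }

    if __name__ == "__main__":
        # Write the snapshot next to the experimental results.
        with open("environment.json", "w") as f:
            json.dump(environment_snapshot(["numpy"]), f, indent=2)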

For Europe, it’s important to stand for what we believe. I don’t think we want a society where governments and big corporations control us

This reproducibility thing has a whole other aspect. On top of that, machine learning has a problem of its own: you don’t know exactly how and why all the inferences happened, because the models were machine-built. And then, if you just add a few things to your data set, or use a different data set, you may get a completely different model and different results. That is unnerving to most of us. You really worry about it.
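
This instability is easy to demonstrate. A minimal sketch (an editorial illustration using scikit-learn, not code from the interview): train two decision trees that differ only in a handful of training points and count where their predictions diverge.

    # Minimal sketch: small changes to the training set can change the model.
    # Editorial illustration; the synthetic data stands in for any real data set.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.tree import DecisionTreeClassifier

    # A small synthetic classification problem with a fixed seed.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    # Train once on the full set, once with the first ten points dropped.
    full = DecisionTreeClassifier(random_state=0).fit(X, y)
    subset = DecisionTreeClassifier(random_state=0).fit(X[10:], y[10:])

    # Count the points on which the two nearly identical models now disagree.
    disagree = np.sum(full.predict(X) != subset.predict(X))
    print(f"The two models disagree on {disagree} of {len(X)} points")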

So now people struggle, trying to prove why it works. Especially in the area of reinforcement learning you see people working on this problem.

You’re active both in Europe and the United States. There’s been a lot of talk about the new global technology landscape. We have two global superpowers, China and the U.S. Europe has been doing its best to catch up with the two giants: we have Digital Europe, EuroHPC, ethical AI, different approaches as a sort of competitive advantage. Do you think Europe can come close to the U.S. and Chinese digital superpowers?

In the AI space, China almost scares me. They don’t have any inhibitions about gathering data. They don’t even use IDs; you get your face scanned at a police station. It’s not gonna happen in Europe. Well, we may be getting there, with Chinese companies gathering data on us. That’s the big worry: these companies gathering data on us whether we like it, know it, or not.

Actually, they’re now educating more IT personnel and scientists in China than in the rest of the world combined. That should really worry us. Some people will say they’re not as efficient, not as innovative. Will that change? Will their societal structure inhibit them?

U.S. and European social structures are similar. People are worrying about privacy in the U.S. as well; there’s a reason there have been so many scandals with Facebook, Google, and the NSA itself. Even Americans have worries; this isn’t unique to Europe. We have stricter laws that help us, there’s no question. But we need to update our policies to reflect how we want to deal with the impact of these large corporations and government entities. Is it OK for them to mine your data to the point where they are clearly targeting you and steering how they want you to behave?

Will the European ethical approach to technology spread all over the world, with other countries following?

In China, they don’t even see it. But I think for Europe it’s important to stand for what we believe. I don’t think we want a society where governments and big corporations control us.

The paradox is that if we don’t allow our companies to gather data about us, it could impede AI development, while in China they will collect more and more data on customers and citizens thanks to governmental support for AI.

We’ll just have to be more innovative. I always like to spin positive. We may have an advantage too, and we shouldn’t give up on that.


Dr. Anne C. Elster is a Professor at the Department of Computer Science (IDI) at the Norwegian University of Science & Technology (NTNU) in Trondheim, Norway, where she established the IDI/NTNU HPC-Lab, a well-respected research lab in heterogeneous computing. She also holds a Visiting Scientist position at the University of Texas at Austin, USA.

Her current research interests are in high-performance parallel computing, heterogeneous computing, and machine learning for code optimization and image processing. She is a Senior Member of the IEEE. Her funding partners and collaborators include, among others, EU H2020, ARM, NVIDIA, and Statoil.


We would like to thank the organizers of Supercomputing Frontiers Europe 2019 for their help in arranging the interview with Dr. Anne Elster.


Read the Polish version of this text HERE.
