If making profit remains the only goal of technology development, nothing will change, says Katarzyna Szymielewicz of the Panoptykon Foundation in conversation with Maciej Chojnowski

Maciej Chojnowski: Amy Webb, a futurist and author of “The Big Nine”, warned us not so long ago that in 15 years we might wake up in a dreadful world where our identity would be infallibly verified with biometrics and digital technologies. Only the wealthiest would stand a chance of escaping total surveillance. Are we going to wind up oppressed by digital totalitarianism?

Katarzyna Szymielewicz*: There is no simple answer to that. Your question contains several assumptions. Firstly, you mentioned the possibility of infallible biometric identification of our bodies. Secondly, you equated full identification with taking control of our identity, which doesn’t necessarily have to be the case. Thirdly, you assumed that such control would translate into totalitarianism and, by extension, into political subjugation.

But I would agree that in 15 years technology will make it possible to identify our bodies in no time. The EU countries have already included biometric features in passports and IDs. This wasn’t preceded by any open debate, although the matter was sufficiently important to have been discussed in public. We just accepted it as the natural order of things. And that’s sad.

But does that mean we are losing control of our digital identity? I don’t think so. Identity is a much more complex construct than the relatively simple identification of a person. On the other hand, our freedom in the digital world – the freedom to make life, political and consumer choices – is threatened by private companies and the marketing specialists using their services. That is a fact, even though they cannot identify us and are not agents of a totalitarian state.

What is the crux of the matter then?

We have less and less control over our life choices – choices that define us as social beings – and over how we are perceived and judged by others. You could say that we are losing control over the social mask, or face, we wear in our contact with the world. And that face is an integral part of our identity.

Paradoxically, the more effort we put into creating our image and sharing personal information, the more fuel we provide for behavioral analysis. By observing our behavior and comparing it with the behavior of millions of other people, internet tycoons learn our traits. Acting with great accuracy, they are capable of discovering what I really think and what my deepest motivations, desires and fears are. Based on their research, they make their own assessments and judge how reliable or attractive I am. My own opinion on those matters doesn’t really count, because banks or my future employers would rather trust an assessment based on data. Not the data I have made available, but the data resulting from the analysis of my behavior. Big companies do not care about what I declare; they are interested in my real emotions and behavior on the internet.

Coming back to the totalitarian threat: can we imagine all this being centrally planned and politically controlled? We certainly could if we lived in China. But in Europe? I want to believe that there are lines we will never cross and that we are smart enough to nip similar projects in the bud.

Unfortunately, we have relinquished that power to big corporations. As a consequence, we have no influence over whether it will be used for political ends or not. That will depend on whether dominant digital corporations are willing to cooperate with states – claiming, for example, that they want to improve a certain domain of social life – or not. In my opinion, that scenario will first be tested during the introduction of autonomous vehicles. Once it is agreed that self-driving cars are safer and more eco-friendly than those driven by people, it may be only a matter of time before using that technology becomes obligatory and that area of life is subjected to full digital control.

Some say that global tech companies are stepping up their presence in domains that were once reserved for the political sphere. They are getting interested in more and more sectors, e.g. medicine or public transport. And since governments cannot keep up with law-making or see the big picture, they have to step aside…

… or benefit from it. They could say: “Oh great! You have come up with a solution. That’s excellent, because we have an epidemiological problem in our country!”. Alternatively, they could say: “We don’t know how to manage our public transport. Please help us”. They could also give in to pressure from citizens saying: “There are companies, algorithms and smart software – why aren’t you making use of them?”.

I agree that states have found themselves in a difficult position and that if we do not reject the paradigm of economic growth, which today is propelled by data, they will become even more dependent on tech companies. They are not capable of managing data efficiently enough. They do not have enough data to develop their own solutions. There is also a third option: they can make the knowledge created and controlled by private companies available to society. But this is a difficult regulatory project that would require a lot of courage and determination to openly take on global corporations. Seen from the perspective of a nation state, the project seems foolhardy, but the European Union could try to implement it.

In this geopolitical game, dominant technology platforms like Facebook, Google or Amazon are on a par with the administrations of the biggest countries. Obviously, whenever they refer to their role, they say they are impartial. Even if they don’t openly support one political party or another, they certainly tend to secure their own interests, which directly affects our economic and social situation. They support capitalism because they are its driving force. An outstanding analysis of their business model by Shoshana Zuboff [Editor’s note: professor at Harvard Business School, author of “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”, 2018] has shown that dominant platforms create a system of governance to which politicians and public authorities have to adapt.

That’s a disturbing thought.

Unfortunately, political power means nothing in confrontation with surveillance capitalism. But as I already mentioned, the third option – the path between rejecting technology-based growth and submitting to the power of private corporations – is still available. It is about making key resources, such as social networks and the knowledge generated by commercial algorithms, available to society. Governments won’t handle the issue by themselves. But today the operation of technological platforms depends on our work and our participation in the system. We don’t have to be bystanders. We provide the data, we elect the politicians, we choose the services we want to use. So we can also choose a different model – one in which we would have a say, in which we stop being an exploited resource and a target for marketing experiments.

There are two divergent approaches to the digital revolution. The first, shared for instance by Shoshana Zuboff, is critical and pessimistic: we live in a capitalist system of surveillance and control. The second is full of enthusiasm for innovation, which brings to mind solutionism – the belief, mocked by Evgeny Morozov, that technology is a cure-all.

One day we will see smart and pure AI that will arrange the world for us.

Exactly. But there has to be a more moderate approach, right?

Some things simply cannot be settled with algorithms or regulations. The technology that has developed over the past 20 years is undergoing a crisis. We have realized that our freedom to make decisions in important areas of our social life has been limited. And I’m not talking here only about the targeting of political messages, the risk of disinformation or the way information bubbles affect our actions – although these topics make headlines on a daily basis. I also mean that technology exerts extensive control over our lives, that we are getting addicted to it, that we are becoming less attentive, and that we are wasting our time on constant clicking and making ourselves available to others.

It’s a very intimate experience; each of us feels that loss in their own way. At the same time, we are pressured to stay visible and be liked on the brutal market called “community”. In this digital ecosystem we all experience the same precarious situation. We cannot pay money for what is offered “for free” – internet platforms do not want to see us as customers enjoying full rights. They want us to keep providing them with information about our behavior which, thanks to algorithmic processing, is turned into valuable knowledge about people.

The crisis of confidence between users and platforms is aggravated by disappointment with our collective actions on the internet. It is obvious to everyone, even to Mark Zuckerberg, that we are not going to create global democracy on the internet or anywhere else. But it turns out that we may use this “connecting” technology to fight, terrorize and blackmail each other.

When we think about a modern panopticon, we tend to blame technology. However, it seems that the real problem lies with business models. Can technologies be made to serve us in a less invasive way?

Of course. Technology is only code, a tool. It can be designed to do various things. That is also the core of my general idea of building society-based infrastructure, meaning resources, social networks and databases that we, as citizens rather than consumers of services, would be able to control.

The same principle applies to algorithms. I don’t share the fatalistic view that introducing neural networks will result in the destruction of mankind. Implementing neural networks may prove beneficial, provided we use them to solve problems that lend themselves to statistical analysis and pattern detection, and not to make moral or political decisions on our behalf. During the learning process we can also introduce rules that we find important, for instance “optimize road traffic but remember to give way to pedestrians”. We mustn’t forget that goals, values and legal principles should be defined and protected by humans.

I agree that the problem lies in the business model, or in whatever drives it. And it is predominantly driven by profit – profit which needs to grow exponentially and which will never be high enough to say “Stop!”. I am not against the idea of making profit. But I can’t accept a system in which profit is the only value, while everything else, namely the needs and rights of people, is treated as a cost. A cost that has to be brought down to a minimum or, better yet, excluded from the company’s budget.

Today, the most common approach is: you are a resource which we are going to exploit.

This model is going to crumble. Note that digital technology also has an environmental dimension. All that development is not taking place in a vacuum; it is consuming the Earth’s resources. If we calculated the environmental cost of publishing an ad on the internet, of our searches, of Uber rides or of the other tools we use several times a day, we would understand that these are very energy-consuming processes. Not to mention cryptocurrency mining, which globally consumes more electricity than a medium-sized country. In our everyday life we don’t bother our heads about it, as we can’t see the cloud data being processed on remote servers. All we see is a tiny little device.

The business model underlying digital services must change to a more sustainable one. It needs to start treating people as humans and not as resources that can be exploited with impunity. And it needs to take environmental costs into account.

Can we make that change happen by exerting influence as consumers?

Theoretically yes, but I’m not sure if we will be able to organize ourselves. Are we ready to give up our comfort and take the risk of exerting real pressure on the dominant platforms? I don’t know.

Maybe we should rely on regulations?

We are in the middle of an interesting experiment with the GDPR, but after one year it is difficult to assess what the results really are. Zuboff would probably say that it is a minor adjustment, that we will soon discover companies have already figured out how to circumvent the restrictions, and that we will need 15 years to “regulate” such practices.

The vision in which big corporations are always able to circumvent the rules in force is very pessimistic. Is it correct? I don’t know yet. I need three more years to see how the GDPR will have changed the market and to test the rights arising from such regulations. Each of us can exercise them, and I would strongly recommend doing so.

Telecommunications experts claim that soon it will be impossible for the mobile network in Poland to handle more traffic. There will be no Internet of Things without the 5G network. But your report “Network tracking and profiling” is rather critical of that technology: “Systems of connected devices operating within the Internet of Things may become the biggest and unprecedented threat to the privacy and, in some cases, e.g. smart vehicles or medical equipment, to the physical security of users.” A similar opinion is expressed by Bruce Schneier, a cybersecurity expert, who recently called in “MIT Technology Review” for slowing down a bit on connecting everything to everything. Should we be skeptical about the Internet of Things?

Whenever we want to discuss issues related to IoT, machine learning or algorithms, it would be advisable to first determine our goals. What problem do we want to solve with such tools? What would we need them for? If the goal is simply to get us to spend more money on digital services, then we should ask a serious question about the price and environmental cost of it all. Someone has to pay for it, right?

If we consider Chinese investment projects, we cannot ignore the geopolitical implications and the fact that we would become technologically dependent on an unpredictable actor. If that developmental leap is to be financed with public money, it means we won’t be able to finance other investments, e.g. in transport or public education. So if the goal of the project is to serve consumption purposes only, I see no grounds for implementing it.

I’m not saying that the development of the Internet of Things won’t help us solve real problems. It may prove useful in supporting our healthcare system, caring for the elderly, and optimizing waste management or recycling processes. But I haven’t heard any discussions about those issues yet. What’s more, before that technology steps up its presence in our family life and our relations with the state, we first have to secure it and develop reliable control mechanisms for the data transferred between connected devices. At present, in the case of smart household goods and toys, that condition is far from being met.

Technological experiments which respond to the real needs of people are necessary. But they should also be designed by the so-called average person, and not only by a corporation that has a certain picture of that person. We need a society-based process in which citizens have a say in deciding which technologies should be financed with public money. If making profit remains the only goal, nothing will change.


*Katarzyna Szymielewicz – a legal practitioner specializing in human rights and new technologies. She is a co-founder and the President of the Panoptykon Foundation and the Vice-President of European Digital Rights.
She graduated from the Faculty of Law and Administration at the University of Warsaw and from Development Studies at the School of Oriental and African Studies. She worked as a lawyer at the Clifford Chance law firm and was a member of advisory councils to the Minister of Digital Affairs. She completed internships organized by Ashoka, an international network promoting social entrepreneurs.
