"There is a borderless system of technology and we're trying to use bordered laws to deal with it. I wonder if the authority of the state will be able to get anything done anymore," says Jamie Bartlett, the author of "The People vs Tech", in conversation with Maciej Chojnowski.

Maciej Chojnowski: I'll start with a quote: "One way of defining technopoly (…) is to say it is what happens to society when the defenses against information glut have broken down. It is what happens when institutional life becomes inadequate to cope with too much information. It is what happens when a culture, overcome by information generated by technology, tries to employ technology itself as a means of providing clear direction and humane purpose."

Jamie Bartlett*: Neil Postman's "Technopoly". A brilliant, incredibly clever book. It was written in 1992. Funnily enough, I read it after I wrote "The People vs Tech". I'd love to know what he'd have said about what's going on now.

His description of information overload, and of the inability of institutions to respond to it, especially in education, is to me exactly correct. I think people underestimate how much the sheer volume of information is part of the problem of modern politics. A lot of people don't seem to appreciate that we're simply neither born nor trained to deal with that level of information.

I've actually been going around telling everyone to read "Technopoly". Postman summed up brilliantly the problem of how our brains process information.

He was often labelled as a Luddite by his opponents. Did you ever face this sort of accusation?

I did yesterday.

What did you reply?

That the Luddites were actually right about a lot of things. The idea that they were just angry about the machines, so they smashed them up because they didn’t understand them, is not true. They understood exactly what they were and knew what was going to happen to them as a result of it.

If you look at the 50 years that followed the Industrial Revolution, the situation got worse for lots of people. They went from skilled artisans to piecemeal workers in factories. Luddism was not a blind hatred of any technology. It was a fear that specific types of technology would make your life worse. The Luddites had no means of expressing that. They couldn't vote, there was no universal suffrage, so what other options did they have? They could only smash those things up.

So sometimes, when people call me a Luddite, I say: "You know what? They had a point!" Luddism was the fear that the machines don't work well in society. That they're taking people somewhere quite bad. I use technology in so many ways. But that doesn't mean I can't be critical of some aspects of it.

In your latest book, "The People vs Tech", you use the term "dataism". It describes a situation where algorithms influence many, not to say all, aspects of our lives. But to lots of people algorithms are useful. They can download apps and keep track of their diet or their exercise. Do you find algorithms dangerous in general, or is there a certain threshold beyond which we lose our free will?

Dataism, as I understood it, is not just about the rise of algorithms. It’s the belief that with enough data you can solve society’s problems. Almost like a religious belief in the power of machines and numbers!

It's incredible how much we now outsource to algorithmic calculations. The problem I have with it concerns politics in particular, because the life of an active citizen should be thoughtful and should require making complex judgments about moral questions. It's important to maintain that habit of critical reasoning. There is a danger that the more we rely on algorithms, even in our personal lives, the more we stop thinking critically in that way.

It isn't such a big jump from asking Amazon's algorithm which book I should read next to asking a machine which political party I should vote for. That's what bothers me most.

You also describe the rise of a tribal approach in the Internet era. It's dangerous because it results in the fragmentation of our political identity. According to you, the Internet is "the largest stock pantry of grievance in the history of mankind"…

You're the only one who's mentioned that! It's an important part of the story.

I like the concept and the phrase! But on the other hand, we often hear about something seemingly opposite: the wisdom of crowds, the hive mind. Companies like Google and Uber have successfully created campaigns of fear encouraging people to protest against attempts to regulate them. And people got engaged in defending their right to information or services, whereas in fact they were defending these companies. So can we be divided and united at the same time? Tribalism versus this sort of unity?

Obviously tribes are united, too. I don’t think we’re all totally fragmented. We’re clustering in certain warring, opposing groups, where you defend your side no matter what.

The people who control the technology have an immense power over the tribe. They can manipulate their own supporters in very clever ways that we don’t fully understand. All Google needs to do is put something on their front page and it can just kill a piece of legislation.

What if Google and Facebook have a liberal bias? What if they're subtly changing what information you receive during an election and no one really knows? Maybe they're not doing it on purpose. Maybe it's just happening accidentally without them even knowing it. And without us knowing it, too.

And that goes against everything we believe an election should be about – fair information, equally distributed. You can’t do that anymore.

Tell me more about the Internet as this "stock pantry of grievance".

In every country I look at, everybody now feels like their group is hard done by. Everybody has huge amounts of legitimate grievance that they constantly share with each other. Not just fake stuff.

Everybody can express themselves. Free speech is good, but a global "Speakers' Corner" is hard to bear in the long run. Everybody has an opinion, and it's usually not well-balanced.

I think with Facebook and other platforms, you end up disliking people that you used to quite like, which is a really negative thing to happen.

Admittedly, I stopped following some of my friends on Facebook.

The Silicon Valley bosses never understood human nature. They have this rather mechanical view: the more you connect, the better you’re going to get along. That’s how machines work, not humans.

Think about what Facebook and Twitter actually ask you to do. They just encourage you to post whatever is on your mind. But that is rarely a well-thought-out idea, so they're literally asking you to splurge out the first thing that comes into your head.

Stream of consciousness.

Exactly, that’s kind of what they want you to do. There are obviously examples where people write long, useful posts that help me understand things. But the question is: what’s the overall effect?

Let's talk about the middle class. It used to be the engine of the modern economy, but now it's in decline. The rise of AI and automation will put many middle-class jobs at risk. You claim that we shouldn't be afraid of mass unemployment but rather of increasing inequality. What's your scenario for the future of work?

The one I'm most worried about is David Autor's concept of a barbell-shaped economy. Well-educated people, who are very good at using technology, become wealthier and wealthier. With AI, which is a general-purpose technology, you can become an industry leader in lots of different sectors of the economy. You don't need huge factories to do it.

Then there are the middle jobs which, based on various predictions, seem to be under threat. It's sometimes called routine cognitive work: mind work, but relatively predictable. Paralegal assistance, human resources, all these kinds of things. And that big chunk of society which is seen as the middle class gets fragmented. Some of them go to high-level areas and get sucked into the tech sector. They become very liberal, cool, well-paid and more distant from the other half, which gets dragged down into the piecemeal, software-based service sector, which mostly serves the rich.


These jobs usually aren’t very well-paid, because there’s so many people who can do them. They’re quite insecure, they’re not particularly well-unionized. That’s really bad, because if you don’t have a strong middle class with shared values, you end up with more division in society.

Raising taxes is also a problem, because poor people can't afford to pay as much tax and rich people learn how to avoid it. People in the middle class are the ones who pay all the tax. If they get fragmented into these two groups, then you'll find taxes harder to raise. People at the bottom are gonna get more pissed off. And people at the top are gonna get more liberal.

People get more and more divided. No healthy democracy really survives beyond a certain level of inequality.

In your book you go even further: you say you're afraid of social unrest as a result of this divide.

San Francisco, the home of all of this, is an incredibly divided city. There’s already constant protest about what’s going on. And I’d be staggered if we don’t see a return of some sort of Luddism. I’m quite sure that is what will happen.

What do you think about the Silicon Valley solution to all this, namely basic income?

I'm not a fan of that at all. It's an interesting idea, because it sort of unites the far left and the libertarian right, which is quite unusual. But I think it's a terrible idea. It's the same problem: the Silicon Valley people don't understand what motivates humans. Give them some money and they'll be happy, cause they'll be able to write poetry.

Or become programmers.

Or maybe they’ll just sit around, and be cool and have fun, develop their own interests and stuff.

But this isn't what people on the street think. People want to feel that they're doing something constructive, valuable. That they have a sense of purpose. It not only gives structure to their day, but also gives them some sort of honour.

Work is so important to people.

In a fundamental way! The people who have the idea of basic income are always people who will carry on working. I don't see any poor people asking for this. Would you be happy living in a society where loads of really rich people had amazing jobs, purpose and structure, and you'd just be given money by the government to sit around and do whatever you wanted? You'd be furious about that!

Sensible politicians are talking much more about career re-training. If you're gonna have tens of thousands of truck drivers out of work, then some serious investment is needed. You have the right to be re-trained and the government should pay for that. But what do you have at the moment? Really nothing. It's not considered important. So I think the people who talk about a universal training income are right. The problem is we just have to find a way to pay for it.

Especially when the biggest companies don’t want to pay taxes.

Yeah, so it’s annoying when they talk about basic income. Great, but how are you gonna pay for it if you don’t pay your taxes?

None of this means that I want to stop the technology. I'd just prefer that we make sure we're ready for the turbulence that might come with it.

Apart from the gradual erosion of the middle class that we've been talking about and growing social inequality, there's another threat you mention in your book: the decreasing influence of nation-states on global socio-economic processes. What's to blame? The technology itself or the neo-liberal paradigm in which it exists and operates?

Technology is created according to a model that already exists. In China the technology is not eroding the power of the state. Quite the opposite: it's really helping them do things. The existing authoritarian regimes in the world might actually find that this technology helps them become even more powerful and centralized, and control their people even better.

In the liberal part of the world I think it erodes the state’s ability to get things done. Every policy idea is disputed, debated, no one can agree on it. The possible danger is that the authority of the state actually can’t get anything done anymore, because they don’t have the rules, they don’t have the powers, they can’t cross borders. The authoritarians find it easier.


So I don't think it's just the fault of neo-liberalism. Of course it partly is, but you have an international, borderless system of technology and you're trying to use bordered rules and laws to deal with it. So I actually partly blame governments, cause they're not changing quickly enough.

The world is transforming, but we're still doing exactly the same thing we've been doing for twenty-five years.

I find your concept of analogue democracy especially interesting. Do you think democracy clashes with the digital world because humans are essentially analogue?

That underlies the whole book. It's the uncomfortable possibility that democracy may not be a system of government well-suited to this type of machine age. Maybe everything I'm proposing is just trying to patch over this underlying problem.

Democracy works when citizens are capable of understanding how judgments are made. The machines are so complicated today that you can't do that anymore. Democracy works when you have a small community of people within a border, because you can then have a citizenry that can enforce laws. Digital technology doesn't respect borders, so that doesn't work either. And there's no real solution to that. Apart from some horrible one-world government, which people don't want.

So underlying it all there’s an uneasy feeling that maybe democracy is just not the right form of government. Maybe it doesn’t work, or maybe we’d just have to revisit what we understand democracy really to be.

One of your prognoses is that in a couple of years we will witness a few tech companies take advantage of AI and smart products and create the biggest cross-sector monopolies that have ever existed. And, like banks, they will become indispensable to our economy. Can we avoid this situation? Should we break up the digital behemoths?

The nature of digital technology tends towards monopolization, because the more data you get, the better your product. And the better your product, the more data you get. It's inevitable.

Exactly the same tendency towards monopoly is going to be true of AI and Internet-enabled devices. The only difference is that it's cross-industry. I mean, Google is already a world leader in about six or seven different industries. So is Amazon. Why would you go to a local company based in Poland that doesn't have the same computing power, when you can get the best one in the world from Amazon for less?

This problem is not easy to overcome. I don't think breaking them up is gonna solve it. And how is Poland or the UK gonna break up Google?

It seems rather impossible.

I'm annoyed about the UK leaving the EU, because the EU does have enough power to do something. The UK has to remain very closely aligned when it comes to digital policy, because the EU seems to be setting out a slightly different path to China and the US. It's a good path, as far as I'm concerned. So it's not only about breaking them up, it's about stopping things from getting worse. This is really about the prevention of takeovers and mergers. We have to start putting in place rules to stop big tech companies buying up all their competitors while they're small.

We have to try to do more to encourage domestic startups. A local version of Uber, run by a local authority, where the profits go back into the community. It's not that hard. It might not be quite as good as Uber, but it would be a better service for people to use.


The same goes for health services. I think this is the next big problem. Health data is incredibly valuable now, and there's a real danger that everyone's just gonna give their health data to Google.

We have to encourage startups to create their own products and not just let Google do it. Otherwise, in ten years' time, they will dominate the health service as well.

You often say you can't understand why people want to connect everything with everything. To many people that sounds like blasphemy against progress, but support comes, unexpectedly, from security experts, some of whom claim we're not ready for the challenges of a complex Internet of Things ecosystem. Do you believe we can slow down or regulate this technological progress or the implementation of certain inventions? Or is it more like a force of nature?

Yeah, it feels like a force of nature, because it's getting cheaper and cheaper to connect things. I genuinely believe that in the end non-connected devices will be more expensive than connected ones. Unfortunately, I think that the millions of people who connect everything will have a series of catastrophic problems that will turn them against it.

And it might be too late.

Yes. In the next three to five years, there’s gonna be all sorts of stories about your connected fridge going into meltdown or about your baby monitors being hacked into by someone. All of these devices in your home might be getting hacked into.

Jamie Bartlett during the Big Book Festival 2019 in Warsaw

Domestic abusers can try to use these technologies to manipulate and control their partners more. It’s just gonna create problems and frustration. So unfortunately the only way I see out of this is a series of bad problems making people start turning against this.

What else can you do? Regulate? Governments wouldn’t be able to ban things. Smart devices not allowed at home anymore? It’s never gonna happen.

In your previous book, "Radicals", you name three megachallenges that democratic states are facing: technological change (which is what "The People vs Tech" is about), climate change and attitudinal change towards democracy. Do you think there is a relation between these three factors, or should we consider them separately?

When I wrote that, I probably saw them as separate things. But increasingly I see them as part of the same problem.

One of the reasons we've been so slow and incapable of dealing with climate change is partly that we obsess over tiny things, constantly clashing, arguing and debating. And climate change has become not a question of science but a sort of identity-based form of politics, which is ridiculous. I think how we deal with those problems has been partly a function of technology.

It's the same with attitudes towards democracy. I think the further we get from World War II, maybe even from the fall of the Berlin Wall, and the more our personal lives are based on total personalization, the less democracy looks like it really works for people. And then, conversely, faced with a megachallenge like climate change, which is gonna result in rising sea levels, drought, crop failure and millions of people moving, we will look to the machines to solve these problems for us.

So these things are related, because those problems will lead to a growth in this kind of dataism, the philosophy that the machines are gonna fix it for us, cause we don't know what we're doing. So they are related in both directions.

To conclude on a slightly more optimistic note: in the epilogue to your latest book you give 20 ideas to save democracy in the digital era. Where should we, regular people, start saving democracy?

It really comes down to individual responsibility. I think there are quite a few things. If you think your great duty as a citizen is just to vote, that's a very narrow understanding of your responsibilities.

When you’re bombarded in the attention economy by all this conflicting information, it seems like it’s equally part of your duty as a citizen to hold on to your attention and to carefully look after it, and to take active steps and measures to make sure you’re an informed, thoughtful, discerning citizen.

It might actually mean turning your phone off at 10 P.M., or only checking Facebook once a day. When you go to a forum of people you disagree with, don't immediately respond to the things that you see. Think about them, read them carefully, wonder where they're coming from. Try to understand that.

It also means that, rather than using the most convenient search engine, you try other search engines so there isn't so much reliance on one company.

We all have citizens' duties relating to our use of digital technology that are as important as voting. It's very similar, because most citizens don't vote thinking their vote will change the election result. They vote because they think it's their duty.

If you don't feed the machine with your likes, shares and clicks, you can concentrate on important ideas. It's your duty as a citizen. And if everybody else does the same, we're getting somewhere.


* Jamie Bartlett is one of the UK's leading thinkers on politics and technology. He led the Centre for the Analysis of Social Media at Demos for 10 years until December 2018. He is the author of The People Vs Tech (2018), Radicals (2017) and the best-selling The Dark Net (2014).


We would like to thank the organizers of Big Book Festival 2019 for their help in arranging the interview with Jamie Bartlett.


Read the Polish version of this text HERE
