In a world of superhumans with computer implants and all knowledge at their fingertips, what it means to be human will change. The nature of human rights will have to evolve too – claims Roch Głowacki, a lawyer who advises on AI and new technologies, in an interview with Anna Zagórna

Anna Zagórna: Looking at what the European Union is doing about AI regulation, can we say that we are going in the right direction?

Roch Głowacki*: For the past two years, the European Union has been working intensively to establish Europe’s leadership in the field of ethical artificial intelligence. GDPR, despite its imperfections, has become Europe’s export product and has provided the European Union with an opportunity to set regulatory trends across the world. Naturally, the EU’s ambitions in the context of AI are very similar and if we, Europeans, strive to uphold values such as respect for human dignity, freedom, democracy, non-discrimination, tolerance and justice, then the direction of travel is right. It is also a natural continuation of European humanistic traditions. In any case, to maximise Europe’s ability to exert influence on the international scene, the actions of EU Member States should be coordinated.

In Europe, most of us have already grown accustomed to the GDPR. But will artificial intelligence engineers also get used to it? Some commentators argue that the rules coming out of Brussels are preventing European technology companies from designing bold new solutions. Eva Kaili, MEP, among others, warned against the rest of the world leaping ahead of us. What should be done about this?

I agree that some provisions of the GDPR may be difficult to reconcile with the ever-increasing ethical expectations of artificial intelligence. For example, the principle of data minimisation potentially interferes with the desire to ensure that decisions made by AI systems are verifiable.

One way to check whether an AI-based decision is correct could be to audit the data feeding such a system. However, under the data minimisation principle, the system should not store hoards of data indefinitely.

In the future, AI-based chatbots and other products and services are likely to contain warnings or labels such as “powered by artificial intelligence” or cookie-like alerts indicating what type of AI is at work

There are also other solutions that can help verify whether an AI system has made an error, without trawling through gigabytes of data or interfering with the AI software provider’s intellectual property.

How is the GDPR influencing our approach to regulating artificial intelligence?

We should not forget that it took many years before work on the GDPR was complete. In 2012, the European Commission proposed a reform of the EU’s data protection rules of 1995. The GDPR came into force only two years ago and our search for European unicorns (i.e. start-ups with a market valuation of at least one billion dollars) has been going on for much longer. Therefore, I don’t think that we can blame Brussels for all of the weaknesses of the European technology sector.


The GDPR provides the foundations for some of the future AI rules. The introduction of new regulations will be a long and tedious process. Some of the principles enshrined in the GDPR, as well as those concerning, for example, responsibility for AI-powered products and services, will evolve over time.

However, it is important that EU legislators continue to engage with the public through various consultations and leave certain issues to the experts. An example of how the European Commission can react to the feedback it receives from the market is its recent shift in approach to facial recognition. The draft AI White Paper proposed a five-year blanket ban on the use of facial recognition technologies in public places. This radical solution would have given the Commission more time to decide how to regulate this particular technology. However, probably in response to criticism, the ban was dropped from the final version of the report, released in February this year.

European ethical AI is meant to become our export product. However, aren’t we likely to hinder AI innovation by imposing further restrictions?

This concern is understandable, but it is premised on the assumption that an ethical approach to AI necessarily conflicts with, and has to stifle, technological progress. The recent AI White Paper indicates that regulation in this area will be about ensuring that data flowing through the economy, e.g. in the financial services or healthcare industries, is legitimate, high quality, accessible and, where appropriate, verifiable. By analogy, the rules on anti-money laundering, or the requirements for labelling products and informing consumers about the side-effects of certain drugs, have very similar objectives at their core. These rules are designed to help build public trust, protect markets from rogue behaviours and enable consumers to make informed decisions about products of differing quality.

Is it at all possible for AI to be completely transparent and explainable?

The problem lies not in the ethics-first approach, but in a lack of understanding of the underlying technology. Some solutions are designed using a so-called ‘black box’ approach. For these types of solutions, it is impossible to understand or identify the factors that influenced the system’s decision.

It doesn’t make sense, of course, to require every type of AI to be 100 percent transparent, but the designers of these systems also have other types of models and programming practices at their disposal that are more easily explainable. Both types of solutions (those more and less transparent) can coexist in the market and hold, for example, different tiers of a “transparency certificate”.

The ethical assumptions provide a sense of direction and a framework within which the debate about AI can take place. They are not a problem in themselves. However, problems arise where legislative proposals aim to tackle technological issues in a cursory manner, without a sufficient level of detail. An example of this is the new Copyright Directive. When it hit the headlines, many commentators overlooked the fact that the new text contains important articles on text and data mining, a key process in training AI systems. Unlike the United States (where the act of normalising copyright-protected data is generally considered to be covered by the principle of ‘fair use’, i.e. allowing the use of copyright-protected material without the consent of the rightholder and without remuneration) and several Asian countries (which have robust copyright exceptions), the new directive provides an exception to copyright from which rightholders can opt out in an ‘appropriate manner’. Unfortunately, it is not entirely clear what this is intended to mean. The directive gives one example: for content that has been made publicly available online, it should only be appropriate to reserve those rights by machine-readable means. There is also a risk that Member States may differ in the way they transpose this exception into their national laws.

It remains to be seen how this opt-out mechanism will affect the availability of datasets in Europe. However, AI developers need to be mindful of the intellectual property rights of those who do not want their materials used for AI training purposes. The lack of clarity could prove to be an unnecessary obstacle for technology companies of all sizes.

Can the law influence the emergence of ethical AI at all? What would this look like?

For centuries, law has been used as a tool to strengthen and enforce society’s moral and ethical standards. It will play the same role in shaping our relationship with artificial intelligence. The EU expert group’s ethical guidelines for trustworthy AI have paved the way for further discussion and inquiry into regulating this area. The guidance document published last year sets out seven key requirements for designing “trustworthy” AI. These include the principle that AI systems should be transparent, i.e. allow for the identification of the factors that influence their decisions. This is particularly important where we need to understand why an AI decision was erroneous. AI solutions should also include appropriate redress mechanisms and facilitate the reporting of negative impacts.

In one of the possible futures described by, for example, Yuval Noah Harari in his book “Homo Deus”, human rights will probably need to undergo another metamorphosis, if such a future were to materialise

In practice, this means that AI-based chatbots and other products and services of the future will probably carry warnings or labels such as “powered by artificial intelligence” or cookie-like alerts indicating what type of AI is at work. An entire new sector is likely to emerge, one focused on evaluating and certifying the transparency and reliability of AI solutions. There may also be new types of certifications, with the “AI made in Europe” brand becoming recognisable worldwide. We are also likely to see the arrival of GoCompare- or Trustpilot-type platforms for comparing and evaluating AI systems.

Should Member States create their own artificial intelligence regulators? Or should they rely on the EU’s experts for this? Who should certify AI products? The state, the EU, or both?

Regardless of the extent and the specific types of AI that become subject to regulation, enforcement and supervision should be carried out by regulators whose duties and powers are clearly defined. It is currently unclear whether sector-specific regulatory bodies should establish their own AI divisions, whether there should be a single overarching national AI supervisory authority, or whether these competencies should sit within the existing data protection authorities. This matters because national AI governance structures will dictate the ease of doing business between countries and could affect the functioning of the data market.

And what about unethical AI? Who should be responsible for it?

This is one of the most complicated issues. A software developer does not necessarily have control or visibility over how a given component will be used by the manufacturer, let alone by the end-user. Nor will it always be possible to determine who in the supply chain made a mistake. The level of responsibility could depend on many factors, including the degree of cooperation between the developer and the manufacturer, or the nature of the risks involved. For example, where a software developer and a manufacturer work closely together on a new medical device for the treatment or diagnosis of, say, cancer, the requirements should arguably be more restrictive. If such a device fails, the manufacturer could be required to maintain an adequate level of insurance, so that the patient is not left without the ability to seek redress and compensation, and so that there is no need to investigate which individual component (hardware or software) is responsible for the failure of the AI-based device. In fact, the Committee on Legal Affairs has just published a proposal for an EU Regulation on the liability of deployers of high-risk AI systems. It includes precisely the kind of considerations I have mentioned: strict liability for deployers and minimum insurance cover requirements. Over time, new categories of insurance coverage are likely to emerge, where premiums could depend on the level of ‘predictability’ or ‘effectiveness’ of the relevant solution.

In the new world to come, should human rights be redefined? Should we demand, for example, the right to brain privacy (thoughts, memories) or the right to equal access to technology? Are we soon to face the challenge of defining something one could call ‘the human rights of the future’ or ‘the rights of the superhuman being’?

Your questions are actually not as futuristic as you might think. As a consequence of social and technological progress, we are constantly expanding and updating our understanding of human rights. As you know, the European Convention on Human Rights is almost 70 years old, and the rights it envisages are as relevant today as they ever were. Key concepts such as the right to liberty and security of person, or freedom of thought, conscience and religion, will be subject to new interpretations. An example of this is the right to have one’s data deleted (the ‘right to erasure’). This so-called “right to be forgotten” was recently created by the GDPR and is a modern interpretation of the right to privacy.

Some of the provisions of the GDPR may be difficult to reconcile with the ever-increasing ethical expectations of artificial intelligence

I think that in the near future we will have more clearly defined “rights”, such as the right to be informed that a decision was taken by an algorithm (rather than a human being), the right to know that we are talking to a chatbot and not a human being, the right to question the results of algorithmic decision-making, and the right to know the factors that influenced such a decision.

How else will these rights be useful?

They will be particularly important in building public confidence and trust in AI-based solutions, particularly in a world where drugs are prescribed by robo-doctors, loans are granted by robo-bankers, and candidates for jobs are selected by robo-recruiters.

For example, in one of the potential futures that Yuval Noah Harari writes about in his book “Homo Deus”, human rights will probably undergo another metamorphosis (if, of course, such a future ever materialises). In a world of superhumans with computer implants and all the information about ourselves and our environment at our fingertips, we will have to re-evaluate what it means to be human.

How will the coronavirus pandemic change our approach to regulating big tech?

To some extent, a number of initiatives concerning artificial intelligence (but not only AI) were conceived against the backdrop of the so-called techlash, a growing animosity towards technology giants. The technology sector has been grappling with deepfakes, fake news, hate speech, privacy violations and the like for a while now. However, some argue that in the current political climate, the coronavirus is slowly quashing the techlash. Our relationship with new technologies, and the extent to which we regulate technology companies in the future, will therefore change, particularly given that these companies will be indispensable in fighting the virus and helping us all return to normal, or the ‘new normal’.


*Roch Głowacki is a graduate of King’s College London and a lawyer at Reed Smith, an international law firm in London. He specialises in new technologies and intellectual property rights. He has worked for a number of the world’s largest technology companies and regularly writes articles for leading artificial intelligence publications, including “AI Business” and “The Journal of Robotics, Artificial Intelligence & Law”.
