Ethical artificial intelligence is to be the product made in Europe. An example? An intelligent vacuum cleaner or an intelligent fridge that can be trusted. An appropriate test will verify whether all the rules were adhered to at the level of AI design, implementation, and usage. Maciej Chojnowski talks to Robert Kroplewski.
Maciej Chojnowski: The concept of artificial intelligence ethics has been gaining popularity recently. What is it about? The ethics of artificial intelligence systems, or the ethics of the engineers who design these systems?
Robert Kroplewski: Seen through the prism of ethics, artificial intelligence is a challenge when it comes to the relationship between intelligent assistants and their environment. For it to work, we have to train it, supplying it with data and building models that allow those data to be used in the external environment. That environment can be human, nature, or machines.
It is not about AI systems being ethical, but about their relation with the immediate environment. That is where the questions about truth, facts and credibility start. We start to have doubts about whether the data analysis in the machine was done properly. Whether the data supplied came from the right samples. Whether there was any interference. Whether the recommendation to take a particular action is risky for the environment or not. And how well thought out it is.
We have various statistics showing that, depending on the type of business, the prediction accuracy of artificial intelligence ranges from 60 to 96 percent. There are continuing problems with it in medicine, for example. In marketing, it works wonders.
Is this why AI ethics is becoming so important?
Yes. The introduction of artificial intelligence into the sphere of social and economic relations confronts us with questions about its positive, but also negative, impact. This is the moment when artificial intelligence becomes a socio-political-economic challenge. Artificial intelligence needs to be governed by a set of rules regarding its proper design, development and implementation. That is where ethics comes in.
What can be done under the circumstances?
If we give decision-making power to a machine, which is already happening in stock markets and currency transactions, then someone needs to be held responsible for those decisions. That someone is still human. We simply delegate.
Some have dreamed that artificial intelligence which resembles a human ought to be given citizenship. This is what happened in Saudi Arabia. But what happens with the law and the ethical context? It turns out that for a robot-citizen to act in the legal sphere, we would have to change international and private law, as well as treaties hammered out over the years by the international community.
This is why we told the OECD and European Commission expert groups that this is the wrong direction. The global world order is set up in such a way that introducing a robot whose interests would have to be recognized in every act of international law would distort its stability and security, which are among its main functions. And, for now, a robot with AI elements is an absolutely unpredictable entity.
Furthermore, there is another form of life between artificial intelligence and human beings, namely animals, which also have some sort of intelligence. Maybe it would be best to first grant citizenship to animals? I am speaking a bit provocatively, but it is not entirely baseless, because such ideas have recently arisen. This is why establishing the proper proportions and updating the hierarchy of needs is necessary.
The OECD has remarked that the entire planet, not just mankind, is the setting where artificial intelligence operates. After all, AI will influence water resources and the air. It can help clean the atmosphere, but it can also cause changes that are impossible to predict today. This is why modern AI ethics holds that new forms of artificial intelligence ought to be trained in sandboxes, i.e. tested in isolation before they are released into a telecommunications system or an IT network.
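The sandbox idea above can be illustrated with a minimal sketch. All names here (Sandbox, SandboxViolation, toy_model) are invented for illustration; the point is only that a candidate system is evaluated in isolation, with any attempt to act on the outside world rejected before release.

```python
class SandboxViolation(Exception):
    """Raised when a candidate system tries to act outside the sandbox."""


class Sandbox:
    """Evaluates a candidate model on test inputs in isolation:
    only pure computation is allowed, no external actions."""

    ALLOWED_ACTIONS = {"compute"}  # no "network", no "actuate"

    def evaluate(self, model, inputs):
        results = []
        for x in inputs:
            action, output = model(x)
            if action not in self.ALLOWED_ACTIONS:
                # Block the release path: the model misbehaved in isolation.
                raise SandboxViolation(f"blocked action: {action!r}")
            results.append(output)
        return results


def toy_model(x):
    # A well-behaved model: pure computation, no side effects.
    return "compute", x * 2


print(Sandbox().evaluate(toy_model, [1, 2, 3]))  # prints [2, 4, 6]
```

A real sandbox would of course isolate at the process or network level; the sketch only shows the gate-before-release logic.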
Some time ago, MIT conducted the "Moral Machine" experiment, in which people decided what a self-driving car should do when it must choose between running over an elderly person or a child. It turns out that the choice would be made differently in different parts of the world, because people hold different values. Can we then talk about the universality of rules? Will ethics not be built into machines along with all the local prejudices present in a given population?
It is a huge challenge. Is universalization possible? I believe so, but only in certain matters. In the expert group, we determined that human dignity is the chief value.
This is really important when we stand opposite a machine that could simply determine that preserving human dignity is not that important, because it must be productive and efficient. The OECD's success lies in the fact that dignity is at the centre of what we are talking about, which is all the more significant given that the United States is an OECD member. We have built a civilizational transatlantic alliance.
The situation in China is completely different. There, social integrity and state effectiveness are what matter: vast numbers of people are to be governed in such a way that there is peace and economic effectiveness. China has not become an OECD partner, so there is still work ahead of us; for now, the value of dignity is not universal.
But we think it is going to happen eventually. Why? The rule that derives from dignity is trust. That is why, in the European Union, we coined the term trustworthy AI, that is, artificial intelligence worthy of trust. We have also come to the conclusion that even in China people are going to need to know whether to trust something or not. A product must be created that can be trusted and that can be sold as such.
Is the idea of trustworthy AI a sign of caring about the proper functioning of artificial intelligence, as well as an attempt to promote an original European product?
Some might want to treat ethics as a tool of protectionism, based on the assumption that we must introduce ethics quickly because economic relations are changing rapidly, capitalism is changing, the status quo is being disturbed, and we do not want that. A thorough discussion and negotiations took place on this matter. On the one hand, trust in itself is wonderful. On the other, we cannot allow for perfectionism. As soon as we lock ourselves in an ethical bubble, we will lose economically. So, since we want to be a leader, we moved from this protectionist tool to the idea that it is supposed to be a product: an intelligent vacuum cleaner or an intelligent fridge we can trust.
In order to implement it, we had to describe technical but also non-technical ethical rules. They all must be implemented in AI systems. This is where the European Union experts' proposal to create a so-called verification pilot, which checks whether a particular system really complies with the proposed rules, comes from.
The verification pilot is still being designed. It will not be a certificate but a compatibility test: everyone can check it and promote their solution under the "Trustworthy AI" banner. We then venture out into the world: scaling, financing, and so on. Research clearly shows this could work.
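Since the pilot is still being designed, its exact form is unknown; a minimal sketch of the "compatibility test, not certificate" idea might look like the following. The seven requirement names come from the EU High-Level Expert Group's Ethics Guidelines for Trustworthy AI; the checking logic itself is invented here, standing in for a real self-assessment.

```python
# The seven requirements for trustworthy AI named by the EU High-Level
# Expert Group. The compatibility_test function below is a hypothetical
# illustration, not the actual pilot.
REQUIREMENTS = [
    "human agency and oversight",
    "technical robustness and safety",
    "privacy and data governance",
    "transparency",
    "diversity, non-discrimination and fairness",
    "societal and environmental well-being",
    "accountability",
]


def compatibility_test(assessment: dict) -> tuple[bool, list]:
    """Given a self-assessment mapping each requirement to True
    (evidence provided) or False, return (passed, unmet requirements)."""
    unmet = [r for r in REQUIREMENTS if not assessment.get(r, False)]
    return (len(unmet) == 0, unmet)


passed, unmet = compatibility_test({r: True for r in REQUIREMENTS})
print(passed)  # prints True: every requirement is evidenced
```

Anyone can run such a checklist against their own system, which is precisely the difference between a self-applied compatibility test and a certificate issued by a third party.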
In Poland as well?
We determined that Industry 4.0, by itself, is only a copy of existing industrial relations, and we were after something else. We have grown to a point where we can stop being a country colonized mentally or by capital. The moment of emancipation is here. Industry 4.0, if that were the end point, would simply solidify the current interdependencies. We would then fall into innovation debt in the data economy. This is not what we wish for.
We would still be a subcontractor for other countries: only the technology would change, and the status quo, with all its complexities, would stay the same?
Yes. We are talking about a certain degree of economic sovereignty. The whole challenge is that, while maintaining sovereignty, we want to be open and build alliances in which partners can cooperate. I believe we are making this work.
All our initiatives regarding the free movement of data inside the European Union, but also beyond it, as part of the World Trade Organization, are materializing now. A great example is the initiative started by Japan to create a trusted data space on a cross-border scale. This is our Polish dream, too, and Japan has answered the call for cooperation on the project.
This universalization is going to continue. It is possible for crucial rules such as transparency to become quite common, even adopted by the UN. If a given economy tells us, "No, we cannot be transparent," then we know its product is not worth buying, because it cannot be trusted. Indicators show that the side that does not design ethically compatible artificial intelligence will eventually lose.
The European Union is a highly valuable market which will soon dictate conditions. Take RODO [the Polish implementation of the GDPR – editors], for example. Maybe its implementation is not the best, but it has merit as an idea. Now we are going to have free data flow and artificial intelligence. And just over the horizon is something which, I hope, will also be a pivotal talking point in Polish politics, namely a common standard for machine interoperability. It is about the cooperation of production systems in an intelligent network. This will form the basis.
All things considered, ethics and the letter of the law are two different worlds. Recommendations and good practice have a nice ring to them, but all these rules still must be implemented.
I have been saying for years that technology develops faster than the law. This is how it is. Those who insist that the law must always regulate everything will never manage to build, with that law, any values that are friendly to our community and form the basis of a fulfilled life. This is why we need smarter thinking. The law is really important, of course; without it, many things can go awry. But complementary action is faster than legislation when there are people willing to take it, and we can incentivize this approach.
We have a multitude of tools at our disposal which could support the process: budget law, financing programs, and concepts bordering on law and technology, for example an appropriate solution for data sharing. The state can also manage public procurement in accordance with ethical AI standards and data interoperability standards. This way, we create a behavioural pattern faster than the changes in law materialize.
Ethics is the sum of behaviours that socially grade the relations of one human with another. The law specifies institutionally whether something can be harmful and what the rules of responsibility for artificial intelligence are. It names certain applications which are forbidden, such as autonomous weapons systems, creating a copy of a human, or certain hybrids. These are the boundaries, the guidelines: here, we go no further.
The law is constructed today in such a way that we have freedom of medical, biological and technological experimentation. There are restrictions in the spheres of pharmacology, medicine, and the human body, but the technology itself is not limited. Certain legal solutions are needed to draw these boundaries.
Let us suppose that our ethical AI is "bought" by the United States and all the Western technological giants move over to the light side. Still, there is China on the other side, based on a different model. What then?
I would like to dispel some myths about China. It is said that China is so terrible. But let us be straight here: everyone plays their own game, including Poland. It is not that we simply have alliances. Our allies play particular games with us. These games usually serve our interests, but sometimes they are difficult for us.
Russia came up with the so-called Parasol, namely the Russian Internet. Why? Some say it is about social control and censorship. No doubt about that whatsoever, but it is also about market protection, data flows and values.
European Union Member States started to realize that applying artificial intelligence to the platform economy, in the model which is the most popular one today, would suddenly cause capital flight from our region. Countries with platform giants would benefit from this. This is why the European Union started to courageously focus on ethical AI.
These are great developments, and the work is proceeding smoothly. The Union knows well that only the coordination of policies will decide its economic success. It is about producing trustworthy AI solutions on a mass scale.
I am obviously not so naive as to think that nobody will use the technology in bad faith. But having aims is a must if a human being is not to be reduced to a statistic in some economic terminology.
In "The Big Nine," Amy Webb describes the development of the American and Chinese technological industries and three scenarios which may come true in the future. Webb does not ascribe demonic intentions to digital corporations. She believes that the system is getting out of control, living its own life, increasingly complicated and convoluted. Is she right?
I agree that we do not have to talk about the demonic intentions of corporations. This is just a profit-maximization business. Social media were created within that corporate framework. It is a flat version of reality.
When you read the terms and conditions of social media use, you are not going to find any values there. And in the governmental system of a country, or the transnational systems of various international organizations, one no longer talks about values either. As a society, we made great leaps before coming up with some rules on how to function. We have to talk about an Internet of values. Tim Berners-Lee [the creator of the World Wide Web – editors] says at every conference that we need a return to normalcy, to the beginnings of the Web.
The United States government has set DARPA the challenge of building artificial intelligence that will be able to explain itself. Maybe it is not going to be so bad.
During the recent Cyber Academy conference, you said that we must not accept this flat, commercial model of artificial intelligence. Should we, as consumers, say "Stop" at a certain point?
We can say "Stop," but how much will that be worth? No technology provider will listen to us. This is not just a matter of national politics and the national interest but, perhaps more importantly, of human self-governance. Curtailing internet surveillance ought to be a common interest surpassing all divisions. This is why we talk about dignity. This is the threshold. A human being always needs some sort of private space. I do not want to think minimalistically, but let it be at least that.
This is why there is a need for responsible people who will stand in opposition to various surveillance solutions and say: "This technology is about control. If you want to use it, you will have to show the user that you are controlling him here, and here." If the user then agrees to being controlled, fine, there is no issue. As a community, we do not want to serve something like that; we want to support the user, so that he can say: "OK, no more, I quit" or "I am aware of the risks involved."
If you were to point to the most significant points in the OECD recommendations, as well as the EU recommendations regarding ethical AI, what would they be?
Trust, which goes hand in hand with transparency, is really important. If you are asking me about a tangible effect, then a huge gain of these two initiatives is the creation of a mechanism which allows us to check whether we comply with all the ethical rules of designing, implementing and using artificial intelligence.
The very fact that we ask such a question will make people learn that this is something more than just a technology. We must place it in a wider context.
Are there further plans to develop these ethical projects?
Yes. We must review each plan, consider whether it needs updating, and evaluate the results of those changes. The OECD rules will translate into other rules, concerning the job market or advice for the education sector. The OECD is no longer just an economic organization: it now talks about society and the planet. That is how rapid and far-reaching the change is.
More work awaits the UN, the G20, and other economic and international blocs, where concrete solutions will always be expected. There is a place there for China, Russia or the Cook Islands.
It is about embracing the chance this technology gives us. It is supposed to be a chance for us. Foreign, externally imposed rules for artificial intelligence are not necessarily in line with our thinking. Economic independence, control over our own economic conditions, and human sovereignty are really important to us, because we care not just about the social, but also the economic and business agility of Poland. And we ought to ensure this is the case every step of the way. This is my policy recommendation for the Ministry of Digital Affairs. If we forget this, we will continue to shine with reflected light.
Robert Kroplewski is an experienced attorney-at-law and a specialist in the law of new technologies, electronic media, and social communication services. Since 2016, he has served as an advisor to the Minister of Digital Affairs; his area of expertise is the information society. He is the co-author of the Industry+ strategy (an economy based on data, including data flows, cybernetic trust in global competition, and the development of future technologies designed for the needs of AI). Since 2005, he has been an expert of the Sobieski Institute in the area of new technologies. Since 2018, he has been a member of the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG). Since November 2018, he has been an AI expert in the AIGO group, designated by the OECD.