If enhancing humans proves successful, each of us will have to be enhanced. Otherwise, we will face widening social disparities, says Professor Stuart Russell
Anna Zagórna: Elon Musk has recently presented to the public his Neuralink chip, which he intends to implant in human brains. Will connecting our minds to ultrafast computers be beneficial or dangerous?
Professor Stuart Russell*: Elon views the human-machine merger primarily as a defensive strategy to give us a chance of not losing outright in the race against superintelligent machines. But regardless of the technological feasibility of these ideas, one has to ask whether this direction represents the best possible future for humanity. If humans need brain surgery merely to survive the threat posed by their own technology, perhaps we’ve made a mistake somewhere along the line.
And what if a superhuman is created? A person whose knowledge and skills come not only from their brain but also from the network?
That’s part of the problem. If we assume that neural enhancements actually work, we should also assume that it will become obligatory for everyone to have them. Otherwise, those without them would be relegated to second-class status. The same problem applies to genetic enhancements.
Shouldn’t we demand a right to brain privacy, covering our thoughts and memories, or a right to equal access to these technologies?
Of course we should have privacy. I imagine that could be partially guaranteed, but there is certainly a risk in having one’s brain connected to the internet. People may find ways to infiltrate human brains and control them. It’s like connecting your own blood circulation to a global blood pool. Perhaps it’s not a good idea…
That means we will have to define some kind of future human rights, superhuman rights.
The issue is more about human rights in the context of unprecedented risks and assaults. For example, we should have a fundamental and globally guaranteed right to know if we are communicating with a human or a machine. We should have the right to mental and physical security, including the right to live in a largely true information environment.
Should machines have rights too? The World Intellectual Property Organization (WIPO) has recently launched a virtual exhibition presenting, among other things, how robots are created. WIPO asks whether such machines should have intellectual property rights. But if we give them such rights, should they also have other rights?
The notion of rights for a machine is nonsense. However, it does serve to point out some issues about ownership and the need to sort out other legal issues in the contracts between designers/manufacturers of the machine and its users. Almost certainly, the user who buys the machine and uses it to invent things will be the owner of the inventions and not the designer of the machine. But of course the contract could give the designer some revenue share by advance agreement.
The interview was conducted during the CYBERSEC Global 2020 European Cybersecurity Forum conference held by the Cracow-based Kosciuszko Institute from 28 to 30 September. Professor Stuart Russell was one of the guests of the event.
*Professor Stuart Russell – an ICT specialist and a world-renowned expert in the field of artificial intelligence, a professor of computer science at the University of California, Berkeley, and a professor of neurological surgery at the University of California, San Francisco. He founded the Center for Human-Compatible Artificial Intelligence at Berkeley. Along with Peter Norvig, he is the author of “Artificial Intelligence: A Modern Approach”, which has become the most popular AI textbook. His latest book, “Human Compatible: Artificial Intelligence and the Problem of Control”, focuses on artificial intelligence adapted to the needs of humans.
Read the Polish version of this text HERE