My Lords, I add my appreciation of this timely and balanced report and welcome the chance to debate it here today. Machine learning, enabled by the ever-increasing number-crunching power of computers, is a potentially stupendous breakthrough. It allows machines to gain expertise, not just in game playing but in recognising faces, translating between languages, managing networks, and so forth, without being programmed in detail.
Moreover, AI is still at the baby stage compared to what its proponents expect in coming decades. Twenty years ago, few people envisioned the extent to which smartphones and IT have now changed the pattern of our lives, so it would be rash to predict how transformative AI could be in the next 20 years. Already, AI can cope with complex, fast-changing networks, such as traffic flows or electric grids. It could enable the Chinese to gather and process all the information needed to run an efficient planned economy that Marx could only have dreamed of. In science, its capability to explore zillions of options could allow it to find recipes for better drugs or for material that conducts electricity with zero resistance at ordinary temperatures.
But the implications for society, as we have heard, are already ambivalent. If there is a bug in the software of an AI system, it is currently not always possible to track it down. This is likely to create public concern if the system’s “decisions” have potentially grave consequences for individuals. If we are sentenced to a term in prison, recommended for surgery or even given a poor credit rating, we would expect the reasons to be accessible to us and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.
Integration of databases by AI systems has an impact on everyday life and will become more intrusive and pervasive. Records of all our movements, our interactions with others, our health, and our financial transactions will be “in the cloud”, managed by a multinational quasi-monopoly. The data may be used for benign reasons—for instance, for medical research—but its availability to internet companies is already shifting the balance of power from Governments to the commercial sector.
There will also be other concerns—about privacy, for instance. Are you happy if a random stranger sitting near you in a restaurant or on a train can, via facial recognition, identify you and invade your privacy, or if fake videos of you become so convincing that visual evidence can no longer be trusted, or if a machine knows enough about you to compose emails that seem to come from you? The report rightly raises concerns about these matters.
A report published in February, prepared with input from my colleagues at Cambridge and Oxford, was entitled The Malicious Use of AI: Forecasting, Prevention and Mitigation. Its focus was on the near-term, and it highlighted three concerns: AI could allow existing types of cyberattack to be achieved with less effort, and therefore by more actors; by use of, for instance, co-ordinated drones, AI could facilitate physical attacks, and cyberattacks could occur on the software of driverless cars; and AI could allow more effective targeting of misinformation, denial of information, surveillance and so forth. Overall, the arms race between cybercriminals and those trying to defend against them will become still more expensive and vexatious with the advent of AI.
The academic and commercial communities now speak with one voice in highlighting the need to promote “robust and beneficial” AI, but tensions are already emerging, as AI moves from the research and development phase to being a potentially massive money-spinner for global companies.
The committee’s report emphasises the incipient shifts in the nature of work—an issue addressed in several excellent books by economists and social scientists as well as by the noble Lord, Lord Hollick, and others today. Clearly, machines will take over much of the work of manufacturing and retail distribution. They can replace many white-collar jobs: routine legal work, such as conveyancing; accountancy; computer coding; medical diagnostics and even surgery. Many professionals will find their hard-earned skills in less demand. In contrast, some skilled service sector jobs—for instance, plumbing and gardening—will be among the hardest to automate.
The digital revolution generates enormous wealth for an elite group of innovators and for global companies, but preserving a healthy society will surely require redistribution of that wealth. There is talk of using it to provide a universal income. But it is surely better when all who are capable of doing so can perform socially useful work rather than receiving a handout. Indeed, to create a humane society, Governments should vastly enhance the number and status of those who care for the old, the young and the sick. There are currently far too few of these people, and they are poorly paid, inadequately esteemed, and insecure in their positions. It is true that robots can take over some aspects of routine care, but old people who can afford it want the attention of real human beings as well. Let us hope that we never get to a situation where we accept automata as substitutes for real teaching assistants reading stories to children with proper human empathy of the kind the noble Lord, Lord Reid, emphasised.
Not only the very young and the very old need human support: when so much business, including interaction with government, is done via the internet, we should worry about, for instance, a disabled person living alone, who needs to go online to claim their rightful government benefits or to order basic provisions. Think of the anxiety and frustration when something goes wrong. Such people will have peace of mind only when there are computer-savvy caregivers to help the bewildered cope with IT, to ensure that they can get help and are not disadvantaged. Otherwise, the digitally deprived will become the new underclass. Caring roles provide more dignified and worthwhile employment than the call centres or warehouses where jobs have been lost. Does the Minister think that it is possible to use the earnings of robots, as it were, to achieve Scandinavian-level welfare where the demand for carers is fully met?
Even if we have machines that can, effectively, interact with the real world, this will not be enough to ensure that they have human empathy. Computers learn from a “training set” of similar activities, where success is immediately “rewarded” and reinforced. Game-playing computers play millions of games; computers gain expertise in recognising faces by studying millions of images. But learning about human behaviour involves observing actual people in real homes or workplaces. The machine would feel sensorily deprived by the slowness of real life and would be bewildered. Only when this barrier can be surmounted—and perhaps it never will be—will AIs truly be perceived as intelligent beings, and if that happens, their far faster “thoughts” and reactions could then give them advantages over us.
Many experts think that the AI field, like synthetic biology, already needs guidelines for “responsible innovation”. Moreover, the fact that AlphaGo Zero achieved a goal that its creators thought would have taken several more years to reach has rendered DeepMind’s staff even more bullish about the speed of advancement. But others, like the roboticist Rodney Brooks—creator of the Baxter robot and the Roomba vacuum cleaner—argue that these projections will remain science fiction for a long time. Be that as it may, it is crucial to be aware of the potential of artificial intelligence, even though real stupidity will always be with us.