My Lords, it is hard speaking this far down the list, because I have made all these notes on my own notes and I am not sure I understand them any more, if I could understand them in the first place. Anyhow, like others, I begin by congratulating the noble Lord, Lord Clement-Jones, on his masterful chairmanship. I also thank our advisers. This was a terrific committee to be on, and I learned a lot from it.
DeepMind has been mentioned plenty of times already, but I am here to add a little more to its lustre. The impact of DeepMind has been truly global, but this is not fully appreciated in this country. The goal of DeepMind is, as it puts it, to “solve intelligence”, to deploy deep learning to mimic some of the basic capacities of the human brain. This is the difference between what AI was and what it is becoming. Deep learning is the prime mover of this transformation which, as other noble Lords have rightly said, will transform everything in our lives and is beginning to do so already.
In 2017, the computer program AlphaGo, which DeepMind developed, beat the world champion and No. 1 player, Ke Jie, at Go: a much more complex game than chess. Go is not just a game; it is more like a philosophy. It is 2,500 years old. It is so complex that ordinary players do not even know when a game is finished, yet AlphaGo triumphed in a series of matches over the world champion.
That is stupendous. As one Chinese observer put it, AlphaGo did not just defeat Ke Jie, it “systematically dismantled him”. What is not generally known in the West is the huge impact that this event made in east Asia. In China, the five matches were watched by a total of 280 million viewers—that is about four times the population of this country. They were not only watched but devoured, one might say. As one observer put it, China plunged into an “AI fever”. The impact of DeepMind, a little start-up in King’s Cross originally, has truly been geopolitical. It has been called China’s Sputnik moment, analogous to the events of 60 years ago that dented US pride.
As the noble Baroness, Lady Rock, mentioned, although I seem to have quite different figures, $22 billion will be invested directly in AI by the Chinese Government by 2020. They will try to do for AI what they have done for infrastructure. They have built a vast network of bullet trains in about 25 years, and here we are struggling with HS2. They will probably do the same in AI. Therefore, a global race for pre-eminence in AI is under way, not only between China and the US but with Russia and other major states involved. This competition will drive the technology forward at an accelerating pace.
As other noble Lords have mentioned, it is crucial to recognise that AI is not just about the future. It is best defined in terms of huge algorithmic power. The smartphone in your pocket or bag—or rather, in your hand, because on the Underground and along the street everyone is looking down at one—has more power than the computers that allowed the US to answer its Sputnik moment and land astronauts on the moon some 50 years ago.
The committee is right to conclude that the progress being made in deep learning is not progress towards general AI—AI that mimics or surpasses human intelligence. I think myself that there are good logical reasons why this will never happen. Rather, it will be the ubiquity of deep learning and its application to a variety of spheres of social and economic life that will reshape our lives.
Examples are here already. I will not mention too many of them, but a notable one is that a very high proportion of trading on world markets is done purely by algorithms, with no direct human intervention. They are dealing with billions of dollars—it is quite extraordinary. Similarly radical interventions can be traced elsewhere.
In this new global geopolitical race, the UK cannot hope to compete with China or the US on overall investment in AI. As our report makes clear, this country can nevertheless have a pioneering role and should look to advance this further. Active state intervention will be needed in a variety of domains. It is to the Government’s credit that they have recognised this and prompted the creation of a range of new agencies—the Alan Turing Institute, the AI Council, the Centre for Data Ethics and Innovation and so forth—to which other noble Lords have drawn attention, but how far have these actually progressed?
We cannot remain static in this swirling world of transformation. We have to guess at possible futures and, at the same time, cope with issues raised by the profound transformations that have already occurred. As the noble Lord, Lord Clement-Jones, has said, the large digital corporations must be brought to heel and more effective control over the use of personal and private data returned to citizens. The huge questions that hang over the role of fake news in destabilising democracy must be urgently addressed. What is being done to co-ordinate a response to this? Have the Government in mind any intervention at national level? This is leading to a crisis of democracy in many countries that is all too visible.
Does the Minister agree that we must actively strive to promote, not just AI, but what some call IA? This relates to the point made by my noble friend Lord Browne about intelligence augmentation rather than artificial intelligence. In other words, we do not want to promote forms of activity and technology where human beings are simply designed out. Nowhere is the principle more crucial than in the design of autonomous weapons. Will the Minister update the House on the progress of DARPA—the Defense Advanced Research Projects Agency; a very nice name—in seeking to create a “glass box” form of autonomous weaponry, in other words one where human beings are kept in the loop? We are in real trouble if weapons escape our direct control. Large passenger planes are already mainly flown by computers and the algorithms embedded in them. Hence the airline joke: “What is the ideal cockpit crew? A pilot and a dog. The pilot is there to feed the dog and the dog is there to bite the pilot if he or she tries to touch anything”. This is not what we want the future of humanity to be.
As a coda, the world champion Ke Jie learned from his losses and became a much better player. He “fundamentally reconsidered” his game. DeepMind responded to this by saying that it was “honoured by his words”, and “also inspired by them”. It added that it must take,
“responsibility for the ethical and social impact of our work”.
As other noble Lords have indicated, we must hold it to this promise.