Artificial Intelligence (Select Committee Report) - Motion to Take Note

– in the House of Lords at 3:09 pm on 19 November 2018.


Moved by Lord Clement-Jones

That this House takes note of the Report from the Select Committee on Artificial Intelligence, AI in the UK: ready, willing and able? (HL Paper 100).

Lord Clement-Jones, Liberal Democrat Lords Spokesperson (Digital)

My Lords, it was a pleasure and a privilege to chair the Select Committee on Artificial Intelligence. I thank members of the committee who engaged so closely with our subject matter over an intensive nine-month period and achieved such a high degree of unanimity. There were not just the formal sessions but a number of visits and workshops and even a neural network training session, ending with a fair few lively meetings deciding among ourselves what to make of it all.

Despite the limited life of the committee, we have not stopped talking about AI and its implications since, some of us in far-flung corners of the world. I regret that the noble Viscount, Lord Ridley, and the noble Lord, Lord Puttnam, having made such a major contribution to our work, are abroad for this debate.

I place on record my huge thanks to our team of clerks and advisers, without whom this report, which has been recognised as leading-edge nationally and internationally, could not have been written: our clerk, Luke Hussey; Dr Ben Taylor, our policy analyst; Hannah Murdoch, our committee assistant; and Dr Mateja Jamnik, our specialist adviser.

Our conclusions came after nine months of inquiry, consideration of some 225 written submissions of evidence and 22 sessions of fascinating oral testimony. I thank all our witnesses, who gave such a great deal of time and commitment to the inquiry. I also thank the Minister who, with the right honourable Matt Hancock, gave extensive oral evidence. Since then, of course, Mr Hancock has been promoted twice. There is clearly a connection.

The context for our report was very much a media background of lurid forecasts of doom and destruction on the one hand and some rather blind optimism on the other. In our conclusions we were certainly not of the school of Elon Musk. On the other hand, we were not of the blind optimist camp. We are fully aware of the risks that the widespread use of AI could raise, but our evidence led us to believe that these risks are avoidable or can be mitigated to reduce their impact.

In considering this, we need to recognise that understanding the implications of AI here and now is important. AI is already with us in our smartphones and in our homes. Our task was,

“to consider the economic, ethical and social implications of advances in artificial intelligence”.

Our 74 recommendations were intended to be practical and to build upon much of the excellent work already being done in the UK, and revolved around a number of threads which run through the report.

The first is that the UK is an excellent place to develop AI and that people are willing to use the technology in their businesses and personal lives. There is no silver bullet, but we identified a range of sensible steps that will keep the UK on the front foot. They include making data more accessible to smaller businesses and asking the Government to establish a growth fund for SMEs through the British Business Bank to scale up their businesses domestically without having to seek investment from overseas or to sell prematurely to a tech major. We said that the Government need to draw up a national policy framework, in lockstep with the industrial strategy, to ensure the co-ordination and successful delivery of AI policy in the UK.

A second thread relates to diversity and inclusion in education and skills, digital understanding, job opportunities, the design of AI and algorithms and the datasets used. In particular, the prejudices of the past must not be unwittingly built into automated systems. We said that the Government should incentivise the development of new approaches to the auditing of datasets used in AI and encourage greater diversity in the training and recruitment of AI specialists.

A third thread relates to equipping people for the future. AI will accelerate the digital disruption in the jobs market. Many jobs or tasks will be enhanced by AI, many will disappear and many new, as yet unknown, jobs will be created. AI will have significant implications for the ways in which society lives and works. Whatever the scale of the disruption, a significant government investment in skills and training is imperative if this disruption is to be navigated successfully and to the benefit of the working population and national productivity growth. Retraining will become a lifelong necessity and initiatives, such as the Government’s national retraining scheme, must become a vital part of our economy. We said that this will need to be developed in partnership with industry, and lessons must be learned from the apprenticeships scheme. At earlier stages of education, children need to be adequately prepared for working with, and using, AI. For a proportion, this will mean a thorough education in AI-related subjects, requiring adequate resourcing of the computing curriculum and support for teachers. For all children, the basic knowledge and understanding necessary to navigate an AI-driven world will be essential. In particular, we recommended that the ethical design and use of technology becomes an integral part of the curriculum. I should add that our evidence strongly suggested that the skills requirements of the future will be as much creative as scientific.

A fourth thread is that individuals need to be able to have greater personal control over their data and the way in which it is used. We need to get the balance right between maximising the insights that data can provide to improve services and ensuring that privacy is protected. This means using established concepts such as open data, ethics advisory boards and data protection legislation, and developing new frameworks and mechanisms, such as data portability, hubs of all things and data trusts.

AI has the potential to be truly disruptive to business and to the delivery of public services. For example, AI could completely transform our healthcare, both administratively and clinically, if NHS data is labelled, harnessed and curated in the right way. However, it must be done in a way that builds public confidence. Transparency in AI is needed. We recommended that industry, through the new AI council, should establish a voluntary mechanism to inform consumers when AI is being used to make significant or sensitive decisions.

Of particular importance to the committee was the need to avoid data monopolies, particularly by the tech majors. Large companies that have control over vast quantities of data must be prevented from becoming overly powerful within the AI landscape. In our report we called upon the Government, with the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by big technology companies operating in the UK. It is vital that SMEs have access to datasets so that they are free to develop AI.

The fifth and unifying thread is that an ethical approach is fundamental to making the development and use of AI a success for the UK. A great deal of lip service is being paid to the ethical development of AI, but we said that the time had come for action and suggested five principles that could form the basis of a cross-sector AI code. They should be agreed and shared widely and work for everyone. Without this, an agreed ethical approach will never be given a chance to get off the ground. We did not suggest any new regulatory body for AI, taking the view that ensuring that ethical behaviour takes place should be the role of existing regulators, whether the FCA, the CMA, the ICO or Ofcom. We believe also that in the private sector there is a strong potential role for ethics advisory boards.

AI is not without its risks, as I have emphasised, and the adoption of the principles proposed by the committee will help to mitigate these. An ethical approach will ensure that the public trust this technology and see the benefits of using it. It will also prepare them to challenge its misuse. All this adds up to a package that we believed would ensure that the UK could remain competitive in this space while retaining public trust. In our report we asked whether the UK was ready, willing and able to take advantage of AI.

The big question is therefore whether the Government have accepted all our recommendations. I must tell your Lordships that it is a mixed scorecard. On the plus side, there is acceptance of the need to retain and develop public trust through an ethical approach, both nationally and internationally. A new chair has been appointed to the Centre for Data Ethics and Innovation and a consultation started on its role and objectives, including the exploration of governance arrangements for data trusts and access to public datasets, and the centre is now starting two studies on bias and microtargeting. Support for data portability is now being established. There is recognition by the CMA of competition issues around data monopoly. There is recognition of the need for,

“multiple perspectives and insights ... during the development, deployment and operation of algorithms”—that is, recognition of the need for diversity in the AI workforce. And there is commitment to a national retraining scheme.

On the other side, the recent AI sector deal is a good start, but only a start towards a national policy framework. Greater ambition is needed. Will the new government Office for AI deliver this in co-ordination with the new council for AI? I welcome Tabitha Goldstaub’s appointment as chair, but when will it be up and running? Will the Centre for Data Ethics and Innovation have the resources it needs, and will it deliver a national ethical framework?

There was only qualified acceptance by the Department of Health of the need for transparency, particularly in healthcare applications. In the context of the recent DeepMind announcement that its Streams project is to be subsumed by Google and, moreover, that it is winding up its independent review panel, what implications does that have for the health service, especially in the light of previous issues over NHS data sharing?

The Department for Education was defensive on apprenticeships and skills shortages and appears to have limited understanding of the need for creative and critical thinking skills as well as computer skills.

The MoD in its response sought to rely on a definition of lethal autonomous weapons, distinguishing between automated and autonomous weapons, that no other country shares. This is deeply worrying, especially as it appears that we are developing autonomous drone weaponry. I would welcome comment by the Minister on all those points.

Some omens from the Government are good; others are less so. We accepted that AI policy is in its infancy in the UK and that the Government have made a good start in policy-making. Our report was intended to be helpful in developing that policy to ensure that it is comprehensive and co-ordinated between all its different component parts.

By the same token, I hope that the Government will accept the need for greater ambition and undertake to improve where their response has been inadequate. I beg to move.

Lord Holmes of Richmond (Conservative) 3:21 pm, 19 November 2018

My Lords, it is a great pleasure to follow the noble Lord, Lord Clement-Jones, as it was to witness his excellent chairmanship of the AI Select Committee—not an easy task in such a complex area. There is nothing new in AI. In 1950, the Turing test was devised. In 1956, 12 professors from Dartmouth in the United States were sent off on their summer vacation to “solve the issue of artificial intelligence”. I am not sure whether they are still out there, but there is still plenty to be discussed in this area.

In the short time I have, I shall cover data, talent, clusters and, most importantly, public engagement. As with every element of the fourth industrial revolution—4IR—data is at its core. It is often called the new oil, but this dramatically undersells the quality of data—not least that it is pretty much infinite. Ninety per cent of all data currently out there was created in the past two years, to give noble Lords a sense of the exponential growth of data.

For the Government, there are huge questions about the data that they have, what form it is in and what should be done with it—not least NHS data, to which the noble Lord, Lord Clement-Jones, referred. Indeed, what is NHS data? Crucially, whose data is it? To echo a point that the noble Lord made, I ask the Minister to respond to the House on DeepMind’s recent announcement about moving its health business into Google.

For businesses, the questions are: what data do you have and what do you want to do with it? AI offers such potential but, as with all the other elements of the fourth industrial revolution, it should never be a solution in search of a problem; rather, it offers the potential to solve some of the most intractable problems for business, government and individuals. As I have mentioned individuals, perhaps the most significant point to consider is that we may hold our smartphone in our hands, but it is the size of our data footprint that we should think most about.

Turning to talent, no matter how good the artificial intelligence is, ultimately it is people who will need to be prime throughout the fourth industrial revolution. Not least of these will be international people coming over to be part of building the AI revolution for which the United Kingdom has such a—perhaps unique—potential. However, our immigration system is described as “expensive”, “tedious” and “putting people off”. The Indian Prime Minister, Mr Modi, talking about international students, put it very well: “You want our trade; you do not want our children”. Does the Minister believe that the current immigration system is fit for purpose and optimised to enable us to make a real success of artificial intelligence and all elements of the fourth industrial revolution? Does he agree that it is high time that, as a start, international students were removed from the net migration statistics?

I turn to clusters. AI offers the potential, which has always been present in our society, to enable collaboration across sectors for stunning results. The industrialist, engineer, biologist and neurologist are coming together in fabulous clusters to drive the future. Noble Lords need only go to Pancras Square to see this in action. Yes, it has beautiful buildings, but what you really feel when you step into the square is one great big, beating brain. The golden triangle of Oxford, Cambridge and London offers potential to be the beating heart of AI development and deployment. What is the current situation with the upgrade to the varsity line, which would make such a difference? Infrastructure is key to the success of AI. We can develop algorithms that are as clever as we like, but if we do not have the physical infrastructure, with the fastest broadband, much of this will not achieve its full potential.

Public engagement is the real key. The massive success—or not—of AI will rest upon it. Do people feel part of the AI revolution? Have the public been engaged and do they understand that this is not for a few people in large organisations or government? Everybody has to understand and be positively part of this. If not, AI is likely to be the latest GM in our society. We have reason to believe that we can get this right when we look at the fabulous work on reproductive technology that Baroness Warnock did so many years ago. It was a stunning success because the public were engaged and there was an understanding of who would benefit and where any risks might lie. It will not be enough for a few people in the tech majors or government to believe that the public will just accept AI because they have decided that there are benefits, when there has been no explanation of where those may be felt and, crucially, where the risks may fall.

Shame on us if we do not make a success of AI and the fourth industrial revolution. Without being naive or Panglossian about it—I understand the risks—the possibility of solving some of our most intractable problems is immense, not least in health, mobility and social integration. This is not just about AI: if we get it right we can have a 4IR, fully fuelled, better Britain.

Lord Hollick (Labour) 3:28 pm, 19 November 2018

My Lords, I thank the chairman, the clerk, his team and our specialist adviser for helping the committee to navigate the vast landscape of AI. I will focus on three connected parts of that landscape: AI’s possible impact on industrial performance and productivity; its impact on the world of work; and its possible impact on the distribution of income. I emphasise “possible impact” because the widespread adoption of AI is at an early stage and, while there is no shortage of analysis and prediction, there is as yet no substantive body of evidence to guide us.

Over the last 30 years, the rapid deployment of computing and automation has revolutionised the way we live, learn and work. AI takes this a lot further. It can remember more, think faster and perform complex tasks which we took for granted to be the preserve of humans. AI, with these abilities, will bring about far-reaching changes right across the board.

The Government recognised AI’s revolutionary potential when they placed AI at the heart of their Industrial Strategy last December. That strategy and the subsequent publication of the sector deal in April made the bold claim that AI would potentially add 10% to our GDP by 2030, if adoption is widespread, and boost productivity by up to 30%. The development and deployment of AI are seen as a building-block in the creation of a significant new business sector with good export potential.

The UK’s investment in AI is a fraction of the amount invested by the US and China, both of which are planning significant increases in their investment over the next decade. But thanks to our strong research base and access to the best and brightest academics and entrepreneurs in the EU and globally, our AI sector ranks among the finest in the world. To maintain this position, the Government must commit to replace EU funding for research and development, where the UK currently receives a disproportionately high level of subsidy thanks to the strength of our AI sector.

Investment to provide fibre to all premises nationwide is critical. In February, only 3% of premises were connected by fibre compared with more than 50% in most of our competitor countries. When he was Secretary of State at DCMS, the much-travelled Matt Hancock waved away our concerns and told us that the market would take care of it. We found his assurances unconvincing. What are the Government doing to prioritise funding for this essential infrastructure?

The ability to continue to attract the best and brightest and budding entrepreneurs is essential. The Government have made a good start by increasing the number of PhD places in AI and doubling the cap on tier 1 exceptional talent visas but, as they acknowledge, there is much more to do. The challenge of attracting and retaining talent after Brexit is highlighted in a recent survey of scientists by the Francis Crick Institute: 78% from the EU said they were now less likely to stay in the UK and a surprising 31% of the UK-born scientists said they were now more likely to move overseas. What further measures are the Government contemplating to improve access for overseas talent?

Maintaining and increasing the investment flowing into the development of AI could be boosted if the Government chose to use a fraction of the £45 billion annual procurement funding to partner with the AI sector to develop AI solutions for the public and private sectors alike. This approach would help to address the long-standing British problem of excelling at research but leaving the development of that research for others to exploit.

The UK’s thriving AI sector has proved a magnet for international investors. This is to be encouraged, but must be matched by a determination to ensure that inward investors do not game the tax system and that they abide by the developing rules on privacy and content, particularly the recognition that the protection of the integrity and ownership of data is paramount.

As the noble Lord, Lord Clement-Jones, mentioned, the acquisition by Google of DeepMind, one of the jewels in the crown of AI in the UK, brought welcome funding to develop DeepMind’s leadership position, but it meant that ultimate control now resides in the US. The reality of that control became clear last week, when Google absorbed DeepMind’s healthcare business, which has benefited from a controversial deal with the Royal Free Hospital to access 1.6 million patient records—a deal that the Information Commissioner ruled failed to comply with the Data Protection Act. When we visited DeepMind, we were told that an undertaking had been given by Google that the healthcare business would remain part of DeepMind and based in London. What are the Government’s concerns about the transfer of DeepMind’s healthcare business to Google in apparent contradiction to these undertakings?

Public administrative data, particularly healthcare data, is a valuable public resource and should be made available to commercial partners under strict conditions and on arm’s-length, market terms. Public bodies lack the skills to negotiate such arrangements, so the Information Commissioner’s Office should be resourced to oversee the terms and conditions of agreements to make sure that public information is made available to commercial partners on market terms. What are the Government’s plans to support public institutions to make sure that they secure the right terms and conditions?

In a similar vein, the Competition and Markets Authority should take a close interest in the sale of AI enterprises to foreign buyers. Their sale can undermine the Government’s strategy to foster a UK-based and controlled AI sector of scale and further deepen the unprecedented concentration of wealth and power in a small number of US-based digital oligopolists.

UK consumers are among the most enthusiastic adopters of new technology, but not so UK business. The low level of tech adoption by UK companies, large and small, is part of the story of our productivity gap. The Government, with the help of industry bodies and the AI council, should devise a series of measures, including fiscal incentives, to accelerate the take-up of technology across the board.

Productivity improvements usually spell job losses. The deployment of AI will lead to job losses, and the public are rightly anxious about their jobs, wages, security and prospects. Predictions of job losses range from 10% to nearly 40% of the current workforce. Many will be in the service sector. Predictions of the offsetting new jobs to be created range from a net loss of nil to a net loss of 30%. It is generally agreed that job losses will precede the arrival of new jobs, but it is not just the availability of a new job that concerns the public but the type of job and the pay and conditions that go with it. The experience of the impact of automation on the job market so far is that replacement jobs for unskilled or semi-skilled workers are less well paid, less secure and lack the benefits enjoyed in their previous employment. AI now places at risk many of the jobs which replaced those lost in manufacturing.

Take call centres and large distribution centres, which are often sited in former manufacturing areas. Call centres employ a little under 1 million people. An industry expert told us that by 2020, 40% of those jobs, and by 2025, 70%, would be replaced by AI answering systems. Warehouses are increasingly fully automated and will employ only a few maintenance, caretaking and software people.

In a recent speech at the Royal Society, Professor Stiglitz examined the impact of the adoption of automation on income and wealth distribution and highlighted the increasing polarisation in the workforce between the skilled and the unskilled. Citing US figures, Stiglitz noted that the real wages of the unskilled and semi-skilled worker have declined over the last 35 years, with male workers experiencing a 42-year decline. He warned that, in the absence of a new policy framework, this trend will continue, but across a wider section of the workforce, as AI is deployed to carry out both routine and complex tasks.

The figures in the UK are not as bleak, but many low earners, especially the unskilled, have seen their real income decline or increase only minimally over the last 30 years. There is a widening gap between the high and low earners. Average real wages fell between 2007 and 2015, and have stagnated over the last three years. The same pattern can be seen in many developed countries. There is a correlation between low income and low social mobility, which fosters a sense of disconnect, of being left behind—a sentiment that provides fertile ground for populist politicians.

We have heard much in the last week about bringing the country together. Perhaps bringing the Government together might be a good start. With the right policies, AI could usher in a period of prosperity, but without the right policies it could further polarise society and undermine social cohesion. A priority must be for the Government to make a major commitment to invest, as the chairman of our committee said, in lifetime training and skills to equip people to deal with the far-reaching and continuing changes that will flow from the introduction of AI.

The Government should consider the introduction of a lifelong learning account to replace what the Economic Affairs Committee’s recent report on student loans called the current “unfair and inefficient” funding of post-school education where further education, whose funding has been severely reduced, is the,

“poor relation to higher education”.

We need a new deal to help workers to train and retrain throughout their working lives. This will help to narrow the politically toxic gap between those with skills and those without. I look forward to hearing the Minister’s response to these issues; they have been identified in the report, and he has recognised that more needs to be done. It would be good to hear today what that “more” is.

Lord Kakkar (Crossbench) 3:40 pm, 19 November 2018

My Lords, I join in thanking the noble Lord, Lord Clement-Jones, for the thoughtful way in which he has introduced this report. I also congratulate the noble Lord and his committee, as it is an excellent report. In so doing, I confine my remarks to Chapter 7, which deals with the potential impact of artificial intelligence on healthcare, and I declare my own interest as professor of surgery at University College London and chairman of UCLPartners.

This excellent report identifies that healthcare and its delivery are particularly sensitive to the tremendous opportunities that the application of artificial intelligence will provide. Healthcare also exemplifies all the challenges that the adoption of artificial intelligence will present to society, government, individual professionals and the public more generally.

We have already seen the application of artificial intelligence to clinical practice. Two of the most important applications have been in the area of diagnostics. The first regards the interpretation of retinal scans to help diagnose retinal pathology more rapidly. That application, developed at Moorfields Eye Hospital in conjunction with DeepMind, shows particular promise; it allows for broad application across large communities, reducing the time and resources necessary to make appropriate diagnosis of eye pathology and therefore providing the opportunity for earlier intervention and for interfering with the natural history of diseases in the eye to improve clinical outcomes. Equally, there have been recent reports of the application of artificial intelligence to the interpretation of lung scans to help the earlier diagnosis of pathology in the lung, particularly pulmonary fibrosis; this is an important condition which, if identified early, allows the opportunity for earlier intervention and therefore, again, for improved clinical outcomes.

However, these are rather simple applications. As we move forward in our broader development of the life sciences and biomedical sciences, the opportunity for genomic medicine—the proper evaluation of the genome under individual disease conditions—combined with better characterisation of the phenotype, better monitoring and characterisation of clinical outcomes, and the combination of all those data will provide tremendous opportunities for artificial intelligence, deep learning and machine learning to transform clinical practice.

This transformation will first come in the area of early and more accurate diagnosis; it will soon be applied to the identification of new targets for the management of diseases, with new therapeutic targets for the development of potential new drug entities. This will be done more efficiently and more rapidly, and, of course, in such a way as to deliver on the promise of personalised medicine—precision medicine—through analysis of the characteristics of an individual disease and how that disease behaves, both in individual patients and among many individual patients. One can then predict how the natural history will progress and therefore how we should intervene more effectively.

All this promise is attended by a number of very serious challenges, as identified in this excellent report. How do Her Majesty’s Government propose to deal with seven particular challenges regarding the application of artificial intelligence in healthcare? Without clarity of purpose and of strategy in addressing these challenges, it will not be possible for our country, uniquely positioned as it is with the National Health Service, to bring the benefits of artificial intelligence and the attendant improvement in the delivery of healthcare and clinical outcomes to our fellow citizens.

The first of those challenges relates to data scientists—invaluable experts in a developing field that brings together mathematics, statistics and computational science. These individuals are at the heart of the development of the algorithms that inform artificial intelligence. How do Her Majesty’s Government propose to ensure that the National Health Service can compete in attracting these vital individuals with this particular skill set, whom we do not currently have in sufficient numbers in the NHS, so as to secure the opportunities for artificial intelligence in healthcare?

Equally, a huge amount of data is generated on a daily basis through routine tests, investigations and the follow-up of patients in all healthcare environments. Those data, although vast in quantity, represent a meaningless resource unless they can be curated appropriately and their quality can be secured. They can then be brought to bear to provide opportunity in artificial intelligence application for the benefit of the individual patient. How do Her Majesty’s Government propose to ensure the curation of high-quality data across the widely varying range of institutions and environments where NHS care is delivered to ensure that the value of those data, both for the individual and for society more generally, can be secured?

In that regard, there will also be a need to train current and future healthcare professionals so that they will be able to take advantage of the opportunities that artificial intelligence as applied to healthcare will provide. What moves have Her Majesty’s Government made with regard to, for instance, Health Education England to ensure that curricula are now being developed to ensure both lifetime learning for current professionals and the development of future healthcare professionals so that they can take advantage of the opportunities that are provided? All this will of course require substantial funding. Her Majesty’s Government have committed substantially to increase the NHS budget between now and 2022, but what proportion of that additional funding will be applied specifically to data in the NHS and to the opportunity to adopt innovations associated with artificial intelligence at scale and pace across the entire health economy?

There are then questions relating to the adoption and application of artificial intelligence that attend to other areas, establishing both the social licence that will give the public and patients confidence in the state collecting and keeping secure very sensitive data—far beyond the data that we currently collect, moving to genetic information and beyond—and social licence regarding the sharing of those data, frequently with commercial third parties which have the expertise and experience to exploit them appropriately to provide opportunities to improve healthcare outcomes.

Ethical and legal questions will also need to be answered when clinicians start to rely increasingly on information generated as a result of artificial intelligence applications in making clinical decisions and driving forward patient care. How is that ethical framework to be delivered? How are legal questions around liability to be addressed when decisions are taken on the basis of AI applications for individual patients? Then there are important questions about how NHS institutions will be supported in negotiating access to data both for research and for the development of patient applications. Some institutions are well positioned to do that; others are not. How will Her Majesty’s Government ensure that all that is brought together so that the important opportunities provided by artificial intelligence application for the delivery of healthcare in the NHS can be taken for the benefit of all our fellow citizens?

The Bishop of Oxford 3:49 pm, 19 November 2018

My Lords, it was a pleasure to serve as part of your Lordships’ Select Committee on Artificial Intelligence, and an education. I join others in paying tribute to the expertise and skill of our chair, the noble Lord, Lord Clement-Jones, and our excellent staff and advisers.

At the beginning of my engagement with AI, what kept me awake at night was the prospect of what AI might mean for the distant future: the advent of conscious machines, killer robots and artificial general intelligence. We are probably more than a generation away from those risks. But what kept me awake as the inquiry got under way—it really did—were the possibilities and risks of AI in the present. AI is already reshaping political life, employment, education, healthcare, the economy, entertainment, the professions and commerce. AI is now routinely used to drive microadvertising in political debate, referenda and elections across the world, reshaping political discourse and perceptions of truth. The disruption in the job market, described by the noble Lord, Lord Hollick, will fall disproportionately across the country. In my former diocese of Sheffield, as you drive across the Dearne Valley, you see clearly that the new industries in the former coalfield areas are warehousing and call centres, where there will be immense disruption in the next decade.

The use of this technology has already outstripped public awareness and debate by a considerable margin. My stock image for the use of artificial intelligence has shifted from the Terminator robot beloved of headline writers to the supermarket loyalty card in virtual form silently collecting data from most of our interactions with technology, which is collected, sold and reused, often in covert ways. The benefits of AI are significant. The risks are very real. They are both a present, not a future, reality. The dangers of a disruption of public trust impeding the benefits of technology are significant.

The experts from every sector from whom we took evidence were unmistakably clear on the need for a stronger ethical strand to the UK’s development and deployment of AI. In proposing our AI code, the committee was responding to multiple requests from across the sector for a much stronger role for government and civil society in these debates—not necessarily to regulate but to benchmark and encourage good practice and give a stronger voice to citizens and consumers. Stephen Cave, director of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, said in response to our report at the launch:

“The tech entrepreneur mantra of the last decade was move fast and break things. But some things are too important to be broken: like democracy, or equality, or social cohesion”, and they are in danger.

Our report puts forward five overarching principles for an AI code which it would be good to see the Government affirm this afternoon. The first principle is that AI should be for the common good and benefit of humanity, not the profit of the few. Let us see the power of AI directed to the great problems of the age for the common good. There should also be intelligibility and fairness in the deployment of AI. We always need to know when we are interacting with machines and the principles on which they make decisions. The protection of data rights and privacy is vital to human flourishing and mental health. We need the right to the best education for all to flourish in the machine age—the only antidote we discovered to the uneven and disruptive effects of AI in the workplace—along with the need to ensure that machines are not given the autonomous power to hurt, destroy or deceive human beings.

I fully support the Government’s aim to see the UK as a global leader in the ethics of artificial intelligence, as I do the steps which have already been taken, especially in establishing the Centre for Data Ethics and Innovation. But we need a vigorous public debate on what it means to be human in the age of artificial intelligence and a vigorous debate on what it means to live well with emerging technology. We need to amplify the voice of civil society and its influence in that debate. After the challenge of climate change, the question of how we live well with technology is one of the most urgent of the age. Can the Minister assure the House that the motto of Her Majesty’s Government for the future will be to move fast and mend things?

Lord Reid of Cardowan (Labour) 3:55 pm, 19 November 2018

My Lords, I welcome this report and I want to make a few comments arising in particular from chapter 8 dealing with ethics and responsibility. The field of artificial intelligence sets out to create computer systems that perform tasks that would otherwise require human intelligence. That is the dictionary definition. They comprise a new generation of machines whose nature is entirely different from those we have been used to. In reaping the benefits of these new systems and ceding control, as our infrastructure comes to depend upon them, I believe that we need to mark a watershed in how we think about and treat software.

First, intelligence needs to be clearly understood as distinct from being conscious or sentient. While AI entities may act in ways that emulate humans, their underlying logic remains a function of their architecture. They are in a very real sense “alien” beings whose actions result from motivations, stimuli and neural circuits that are entirely non-human. Secondly, historically, machines were built to operate deterministically; that is, to perform specific tasks within parameters set by their designers. In building AI we are creating systems whose functioning is largely opaque and whose outputs are non-deterministic; that is, what they do under all circumstances cannot be predicted with certainty. Thirdly, competitive motivations are driving the evolution of ever more sophisticated machine intelligence functions, with greater predictive value and more human-like interfaces that improve our perception of both intelligence and empathy. Devices that respond with human voices and virtual call centre operatives who converse like humans are now commonplace. The ability to appear human-like, to conduct sophisticated, responsive conversations and even to recognise emotions allows organisations to project human-like responsibility from what are actually software agents.

Despite human-like appearances and the ability to take actions that are functionally “correct”, they are not doing so out of concern or empathy, nor in the context of a moral, legal or ethical framework, and neither today can they be held legally responsible for their actions. Today in law we make a distinction that a human being may be responsible while a machine or an animal may not be. This creates an asymmetry because when something goes wrong, who takes responsibility for sorting out the problem? It becomes increasingly easy and desirable for every party in the value chain to absolve himself or herself of blame.

As humans, we have law-based obligations as part of our social contract within a civilised society, we have promise-based obligations as part of contracts that we form with others, and we have societal moral principles that are the core of what we regard as ethics, whether derived from rational reason or from religion. Responsible humans are aware of these ethical, moral and legal obligations. We feel empathy towards our fellows and responsibility for our children, employees and society. Those who do not do so are called sociopaths at best and psychopaths in the worst case. Ethics, morality, principles and values are not a function solely of intelligence; they are dynamic, context-dependent social constructs.

Moreover, bias and specification gaming are two important emergent properties of machine learning systems—the latter where they successfully solve a problem but do so via an unintended method, just as humans discover ways to cheat various systems. We must understand that no matter how intelligent a machine is, it may learn to act in ways that we consider biased, unethical or even criminal. For instance, we may anticipate autonomous vehicles evolving unintended bad behaviours resulting from the goals that they have been given. Equally, AI is no less vulnerable than humans to being spoofed or deceived by others, whether intentionally or unintentionally. I will not go into that matter today but it should be alarming when we come to AI-driven autonomous weaponry.

Even in the future, when machine intelligence may exceed human intelligence, we must distinguish between being better at carrying out a set of tasks and human responsibility. Intelligence is not the sole determinant of responsibility, even in human society; we talk about the “age of responsibility”, which distinguishes a minor from an adult and is based on the inability of children to make good decisions, being too immature to understand the consequences of, or consent to, certain behaviour. Representing sophisticated concepts such as “the public good” or “volunteering” in the goal-functions of machines is a far harder and more complex challenge than machine intelligence, yet it is equally important for their correct functioning.

However, the commercial value of displaying empathy means that AI entities will emulate emotion long before they are able to feel it. When a railway announcement says, “We are sorry to announce that your train is late”, the voice is not sorry; the corporation that employs and uses that voice is not sorry either. However, the company sees value in appeasing its customers by offering an apology and an automated announcement is the cheapest way of providing that apparent apology. If it is not capable of experiencing pain and suffering, can it be truly empathetic?

Furthermore, as a machine cannot be punished or incarcerated in any meaningful sense—although it might be rehabilitated through reprogramming—the notion of consequence of actions has little meaning to it. If a machine apologises, serves a prison sentence or is put in solitary confinement, has it been punished? The basis of responsibility built on an understanding of ethics and morality does not exist. It is certainly not the sole by-product of the intelligence level of the machine.

Finally, all those problems are compounded because the software industry today operates in a very different way to others that are critical to modern society, where the notion of audit exists. When we read the annual report of a PLC, it is possible to place some degree of trust in it because the chief financial officer, the accountant and the auditor take professional responsibility for the output. Similarly, an audit chain in the pharmaceutical industry enables regulators to oversee a large, complex and diverse industry. In construction, when a tragedy happens, we are able to trace the building materials used. That process of audit encourages responsibility and the knowledge beforehand of the consequences of actions. But most software today is sold with an explicit disclaimer of fitness for purpose and it is virtually impossible to answer the questions: by whom, against what specification, why and when was this code generated, tested or deployed? In the event of a problem with software, who is responsible? The human owner? The company that supplied the software? The programmer? The chief executive of the company that supplied it? I would therefore argue that machine intelligence needs to be subordinate in responsibility to a human controller and therefore cannot in itself be legally responsible as an adult human, although it might in future have the legal status of a corporation or of a minor—that is, intelligent, but below the age of responsibility.

The GDPR was designed to ensure that passive “data” was linked to responsible human control. Ultimately, we might need a GDPR-equivalent for active machine learning systems to link their function to a human controller to ensure that organisations and individuals have protective and proportionate control processes in place. We might refer to the concept of that clear chain of responsibility, linking an audit of the specifications, code, testing and function to responsible individuals, as “trustable software”. Recent developments, including distributed ledger technology—blockchain to the uninitiated—would permit oversight to be implemented relatively easily.

In an age where software is at the heart of our infrastructure, where these systems are both non-deterministic and fully interconnected, AI needs a responsible human “parent”. Whoever that “parent” might be, it will require a “trustable” process to introduce auditability, accountability and, ultimately, responsibility.

Lord St John of Bletso (Crossbench) 4:06 pm, 19 November 2018

My Lords, I join in thanking the noble Lord, Lord Clement-Jones, for his able chairmanship of the ad hoc Select Committee on AI in the United Kingdom. In my many years in your Lordships’ House I have never been on a Select Committee that has been so absorbing and stimulating.

We are living in the most extraordinary times. The confluence of big data, connectivity and artificial intelligence has revolutionised old industries and created new ones. As the Industrial Revolution transformed the nature of manual work, so AI is set to change dramatically the nature of white-collar work and the service industry, from chatbots replacing call centres to those who make decisions on credit, accountants and, with the emergence of autonomous cars, truck drivers being replaced. This confluence of change means that AI has reached a flashover point, given the computer power now available, the availability of huge volumes of data and the fact that digital channels for interacting with businesses and citizens are now preferred.

Apart from the oral and written evidence, we were fortunate to visit Google DeepMind, the Microsoft research laboratories in Cambridge and the Alan Turing Institute, as well as techUK. My noble friend Lord Kakkar spoke most eloquently about chapter 7 of our report. This details the huge benefits that AI can deliver to healthcare, particularly in the National Health Service. This could include more effective detection of diseases, management of chronic conditions, drug discovery and, of course, delivery of more efficient health services. We are increasingly moving from a world of reactive medicine to one of proactive medicine.

However, one of the potential drawbacks in the National Health Service is the fact that there is no centralised database, resulting in most data being kept in unrelated silos. While keeping data isolated made sense historically as a security measure, the data-driven world in which we operate demands greater visibility and consolidation. Machine learning provides unique value in being able effectively to remove the manual processing of data, thus significantly reducing back-end operating expenses.

Clearly, the management of data in the NHS requires strict adherence to data privacy—there are concerns about the criminal misuse of AI and data—and, as the noble Lord, Lord Reid, mentioned, respect needs to be given to data ethics and accountability. Concerns have been raised about the risk of abuse of AI and breaches not just of public trust but of data security.

Many fear that the merger of infotech and biotech could push many hundreds of thousands of people out of the job market, a point made by the noble Lord, Lord Hollick. Yes, there will be masses of job losses but, equally, replacement jobs will be created.

As the noble Lord, Lord Clement-Jones, said, there are concerns also that big data could create digital dictatorships. Our report covered the need for reasonable access to data, particularly for SMEs. While some are sceptical about the effective use of AI in the UK, our report sought to focus on the positive contribution that it can make to our lives in many different ways. One industry not mentioned in the report but to which the noble Baroness, Lady Rock, may refer is the agricultural sector, where AI can have a huge impact. We now have precision agriculture, where farmers are able to utilise drones in the sky, connected with the help of big data to sensors in the fields.

As the noble Lord, Lord Hollick, said, it was encouraging that the Government, in their industrial strategy, identified AI and data as one of the four grand challenges in which the UK can lead the world for years to come. To maximise this opportunity, we need more qualified data scientists who are able to use algorithms to sort through enormous databases, combining profiles, eliminating duplicates and thereby providing a complete and unified set of data. What plans do the Department for Education and other departments have to provide students with training that prepares them more effectively to acquire data science skills?

Trustworthiness underpins public support for data innovation. I have already referred to the benefits in healthcare and agriculture, but there are huge benefits also in the financial services sector and autonomous systems. Lessons have been learned from the fiasco of data breaches at Cambridge Analytica.

I want to make brief mention of the opportunities of blockchain technology, which is not just about cryptocurrencies but is more a transformational tool and game-changer for the future. Distributed ledgers can be created which will form a significant part of future databases, providing greater transparency and accountability to both the public and private sectors.

There is no doubt that AI presents a significant opportunity to solve complex problems, boost productivity and accelerate innovation. We need to shift legacy mindsets to embrace new ideas. AI and machine learning need to be embraced while respecting privacy, ethics, transparency and good governance. I wholeheartedly embrace all the recommendations of our report and trust that the United Kingdom can consolidate and thrive as a global leader in the development of artificial intelligence.

Baroness Rock (Conservative) 4:14 pm, 19 November 2018

My Lords, I too pay tribute to the noble Lord, Lord Clement-Jones, as an outstanding Chairman of our Select Committee, and I thank all noble Lords on the committee for their warm collaboration and insights on this important report. We could not have completed the report without the hard work of the committee staff, advisers and clerks, for which many thanks. It is a great pleasure to speak in this debate today, especially since I believe we have built up significant momentum on this important subject. We must develop our AI capability: it will underpin our future competitiveness. China has set a goal of $150 billion investment in AI by 2030. I am not suggesting that we emulate that—not least because it is not just about money—but it is strong evidence that AI is a key component of the global economy of the future.

I shall focus today on how we must foster evolving technologies, but before I do, I will begin with ethics, as other noble Lords have done and as I think this whole debate must. Many speakers today also contributed to the recent debate in this House on the NHS and healthcare data, which focused on how NHS data might be used to ultimately benefit patients and the service as a whole for the public good, but without jeopardising confidentiality. In their response to our committee the Government indicated that there were lessons to be learned from the DeepMind/Royal Free case.

To address these and other issues, the committee welcomed the creation of a national Centre for Data Ethics and Innovation. Since our report, we have had a consultation on the centre’s remit, which sets out three functions: to analyse and anticipate risks and opportunities; to agree and articulate best practice; and to advise on the need for action. The first is important as it links the centre to growth and development. The last two will be critical not only for working through the thorny questions of ethics and data, but also, importantly, for communicating what needs to be done and calling on the Government to act where required. The centre can build public confidence and trust in how new technologies are being deployed and how their data is being used to support future innovation.

So to growth and innovation. Priorities for entrepreneurs and start-ups developing AI are the same. What do they need? People and money: the right skills, talent and investment. Beginning with investment, there is undoubtedly cause for optimism. While the Government did not accept the committee’s recommendation to ring-fence a proportion of the British Business Bank’s £2.5 billion investment fund for AI-focused businesses, there is still a significant pool of capital from which funds can be raised; and, given the growth potential, we can be reasonably sure that the investment teams at the BBB will give serious consideration to AI businesses.

Furthermore, commercialising our world-class university intellectual property in this field is a great opportunity to enable rapid growth. Our committee asked universities to set out clearly their principles for handling IP, licensing and supporting spin-outs. The Government highlighted in their response the role of the Alan Turing Institute in looking specifically at commercialisation opportunities, and Research England has developed benchmarking for how well universities commercialise research. I also highlight the strong role the Government are playing in nurturing growth. The industrial strategy of course gives great prominence to AI, but we now have the AI sector deal as well, which establishes the AI council, which will blend academia and industry to ensure that we are fully exploiting the opportunity, attracting AI entrepreneurs to the UK, as well as supporting exports in this space. Lastly, in this as in many other respects, government as buyer can really move the needle. The GovTech Catalyst fund will provide £20 million for businesses using AI to help solve public sector challenges.

Turning to skills, I commend the appointment of Professor Dame Wendy Hall as the first Skills Champion for AI. No doubt she will ensure continued momentum. The commitments in the AI sector deal are significant but we need to make sure that they are delivered. In particular, where measures are sector-agnostic, we need to ensure that they are promoted to AI specialists; for example, as the noble Lord, Lord Hollick, mentioned, the doubling of tier 1 exceptional talent visas from 1,000 to 2,000 a year—leaving aside whether this should be higher still. Some of the skills content in the sector deal is admirably specific, in particular the proposed Turing Fellowship to help attract and retain the best research talent in AI, and adding a further 200 doctoral studentships in AI by 2020.

Finally, I will say a brief word on the potential impact on the labour force. After all, in our rush to build capacity and capability, we must ensure that growth in AI is as inclusive as possible. I welcome what is set out in the sector deal on training and new degrees.

Overall, we must remain optimistic about the benefits of AI to our economy and our society—the boost to our productivity and the exciting applications in everything from agriculture to healthcare. We have a robust framework for growth now in place in both capital and skills—the two prime determinants in scaling up businesses. To bring all this together, we need leadership from the highest level. Once the Brexit smog clears, we must ask what kind of economy we are trying to build. I suggest that we are gathered here today because we believe we can succeed if we incorporate AI and other technologies into our economy. Our committee concluded that the UK is in a strong position to be among the world leaders in the development of AI during the 21st century. We must continue to be its champions.

Photo of Lord Browne of Ladyton Lord Browne of Ladyton Labour 4:22, 19 November 2018

My Lords, as I intend to restrict my remarks to the part of the report that deals with autonomous weapons, I draw attention to my entry in the register of interests, particularly my consultancy with the Nuclear Threat Initiative and its nascent collaboration with the Centre for the Study of Existential Risk at Cambridge University, which of course is the brainchild of the noble Lord, Lord Rees of Ludlow, whom I see in his place. I look forward to his contribution.

I add my congratulations and appreciation to the noble Lord, Lord Clement-Jones, and his committee on this wide-ranging report into the use and development of artificial intelligence. I agreed with many of its recommendations—at least those that I fully understood—particularly for the Government to get a grip on algorithmic bias. The committee’s identification of the probable dangers of a small number of companies owning the lion’s share of the world’s data more than supports its recommendation that the Government stop these companies monopolising control of our data which is in their possession.

I agree also with the report placing a strong emphasis on the UK’s role as an ethical leader in the AI world. This is especially important in the military use of AI. Paragraphs 334 to 346 of the report deal with autonomous weapons, and open with the following sentence:

“Perhaps the most emotive and high-stakes area of AI development today is its use for military purposes”.

I wholeheartedly agree. The report concedes, unfortunately, that the committee had insufficient time or capacity to deal with this, and goes on to record the view that this area merits a “full inquiry” on its own. I fully agree and argue that the sooner your Lordships’ House has this inquiry, the better. We should move directly from this debate to the instruction of that important inquiry.

I strongly agree with the report’s recommendation that,

“the UK’s definition of autonomous weapons should be realigned to be the same, or similar, as that used by the rest of the world”.

In particular, I agree that the present UK definition, which was explained so simply by the noble Lord, Lord Clement-Jones, is problematic. It depends on there being no difference between “automatic” and “autonomous”; it limits the UK’s participation in the international debate, because it speaks a different language; it restricts our ability to show moral and ethical leadership; and it blocks the possibility that the current international process that is considering how to control these weapons systems will reach an agreed definition, which is after all its primary purpose.

Since 2016, in an attempt to find a suitable multilateral vehicle to regulate lethal autonomous weapons, the signatory states to the Convention on Certain Conventional Weapons—a treaty concluded in 1980 to prohibit or restrict weapons deemed excessively injurious or indiscriminate—have sought to assess the potential dangers posed and to consider whether new measures are needed to control LAWs, as they are often referred to. Early in their deliberations, the high-contracting parties subcontracted this task to a group of governmental experts, known as the GGE. The group most recently met in Geneva in August, and the draft report of its 2018 deliberations reveals that it was defeated by the challenge of finding an agreed definition of autonomous weapons, meaning that its concluding recommendation is—as it was the year before, and the year before that—that it should meet again next year. This, despite the fact that most experts believe that the unregulated deployment of LAWs could lead to violations of the law of war and international humanitarian law, while increasing the risk of uncontrolled escalation should there be a major-power crisis.

Almost every delegate to the GGE meetings argued that humans should always retain ultimate control over weapons systems, but the group still failed to agree anything other than that it should continue further expert-level discussion next year—the fourth year of such discussion. In my view it has had ample time to investigate the dangers posed by autonomous weapons and, although important technical issues of definition remain, the time for discussion is over. It is beyond disappointing that, in response to the Select Committee’s recommendation, the Government yet again explained that their policy is to await the outcome of this expert discussion, in the meantime sticking with their “problematic” definition. I suggest to the Government that this expert discussion will never end. There is no sign of it ending at the moment.

We have in this debate an opportunity to ask the Government to think again. It is timeous, because the high-contracting parties to the CCW are themselves meeting later this week in Geneva to discuss the recommendations of the GGE. It is now clear that the only way this process will progress is if the high-contracting parties take back control and initiate proper negotiations on autonomous weapons, with the aim of crafting an outcome ensuring continued human control over weapons of war and the decision to employ lethal force.

Last week at the Centre for the Study of Existential Risk, I met experts who are working together on this challenge. They agree that the development of LAWs poses a serious risk to international security and could spark arms races or lower the threshold for the use of force. They are concerned about how to prevent the deployment of such weapons, once developed, in urban or other settings where mistakes and their consequences are likely to be very costly. In particular, I am impressed by the views of Dr Shahar Avin, one of the three researchers from CSER who will attend the meeting in Geneva this week. He agrees with the growing consensus that the UN’s negotiations have made little progress, that the discussions are slowed by disagreements about definitions and that there has been little constructive engagement, let alone vision and leadership, from major military powers. For a variety of reasons the United States—and consequently Russia and China—is unlikely to provide that leadership. As Dr Avin says:

“In January, the Prime Minister said she wanted the UK to lead the world in the ‘safe, ethical and innovative deployment of artificial intelligence’. Some of the world’s leading AI researchers and organisations, who are based in the UK, have joined international calls to prevent the development of LAWs.

This makes the United Kingdom the leading candidate to provide leadership in the LAWs negotiations, as part of a wider vision for a responsible, safe and ethical future for artificial intelligence. Instead of taking a backseat while holding on to a problematic definition, the UK could furnish its CCW delegates with a national vision generated through a multi-stakeholder conversation, and be seen globally as a leader”— or in partnership with France and Germany, which are already taking the lead—

“in how to respond to emerging technologies”.

I am conscious that this approach, although similar, is different from the second recommendation of the Select Committee—the formation of a panel of military and AI experts, if I remember correctly, to meet within eight months to agree a revised definition. I strongly believe that these matters should not be restricted to the military and the experts. The whole of society has a stake in this, and there should be a broad and ongoing UK conversation. In particular, legislators—Members of both Houses of Parliament, who have been largely silent on these issues—should be involved in this conversation. I thank the Select Committee for creating an opening for the beginning of just such a multi-stakeholder conversation, and I challenge the Minister to go back to his colleagues and ask them to begin it soon.

Photo of Baroness Kidron Baroness Kidron Crossbench 4:31, 19 November 2018

My Lords, I warmly congratulate the noble Lord, Lord Clement-Jones, and the committee on both the tone of the report and the excellent set of recommendations. While leaving the broader questions to members of the committee, I will offer four brief thoughts that might find their place in the wider discussion today. In doing so, I refer the House to my interests in the register.

The first is about the ownership of vast datasets which, as the report says, are,

“fuelling the current AI boom”.

While we hold some rights over the initial transfer of our data, the more processes it is subjected to, the less control or even knowledge of its impact we have. On a recent trip to Silicon Valley, an engineer put it to me this way: “You may be the author of your data, but we own all that it implies about you and all it implies about others”. The data, the inferences and the knowledge it offers are closely guarded by a handful of Silicon Valley behemoths because they are so very valuable. Together they allow those companies to determine the choices and opportunities we are given or denied as individuals and communities, and as a society more broadly.

In the changing landscape of movie production, user behaviour, including the exact moment the viewer stopped watching, their age, socioeconomic group, familial relationships and, in some instances, even their shopping habits, last known holiday or current mood, is increasingly known. This data is used to make production decisions. My colleagues in the film business are increasingly anxious that the elements of production over which they have agency are diminishing, including the very stories that can be made.

This may be an area in which we do not traditionally worry about the control of AI over decision-making, but the stories we tell are an expression of our culture, our time and even occasionally our resistance. If the stories we are able to tell are determined by machine-learned patterns that reinforce what has gone before, is not the end game the miserable prospect that the story of our time will be a reflection of the lowest common denominator of what the AI says we like?

This example of the commercial control of data may be very specific, but I could easily have talked about Google’s monopoly over search ranking, Apple and Android’s gatekeeping role in relation to app stores or Facebook’s ability to make or break the spread of political advertising, so perhaps the Minister will say whether he believes that laws governing competition and market dominance are fit for a business model in which data is the currency?

My second point is that behind this wall of data control is information that it is in the public interest for us to have. For example, it is an ongoing struggle to get tech companies to share data about reporting and complaints they receive from children, particularly those children who do not meet the age restrictions of the services they are using.

The Government’s Internet Safety Strategy has proposed a draft transparency report and, in doing so, prompted both Google and Facebook into some pre-emptive reporting. But neither the government proposal nor the reports already proffered gives the sort of access to data needed to understand how children of different ages react to harms, which drivers of online experience create harm, which have the biggest impact on children’s mental health and well-being, whether platforms are effectively triaging complaints, and what solutions, both AI and human, are most effective in reducing the impacts of different sorts of harm. In short, access to far wider data is essential to gather the insights that would promote better outcomes or defences from harm. The ability to tackle harms at scale is hampered by this lack of access to commercial datasets. So I ask the Minister whether it is time to mandate licensed research access to privately held datasets where there is an overwhelming case of public interest.

That brings me to the question of considering children more broadly in the design of services. In my work I speak to many engineers who design and build AI, almost all of whom admit that, until they met me, they had never considered the needs or consequences for children of the services they design. Many challenges faced by users online are commercially driven, intentional design choices. Such choices require universal standards and regulatory intervention, but others are due to a level of societal blindness on the part of those who create the systems. So, in addition to my strong support for all the recommendations relating to the education of children in schools, I impress upon the Minister the urgent need for professional qualifications in computer science and associated disciplines to have mandatory modules that cover rights by design—including safety by design, privacy by design and ethics by design—impact assessments and precautionary principles in the design of all AI for all users, but particularly children. Engineers are the drivers of tech culture, and an intervention in their training is a cheap and impactful way of tackling those aspects of AI design that are unconscious and unintended.

Finally, the committee’s report concludes that introducing AI-specific regulation would be less effective than ensuring that existing sector-specific regulation applies to AI decisions. I welcome this approach, but we need greater clarity on how existing protections apply to the digital environment. Just as the noble Baroness, Lady Buscombe, confirmed to the noble Lord, Lord Stevenson, that the Health and Safety at Work Act 1974 applies to AI, will the Minister confirm that the Equality Act 2010 and the Consumer Rights Act 2015 similarly apply? In a recent debate I floated the idea of an overarching harmonisation Bill that would operate in a similar way to Section 3 of the Human Rights Act by creating an obligation to interpret legislation in a way that creates parity of protection and redress online and offline to the extent that it is possible to do so. I did not get an answer in that debate, and I wonder whether I might be luckier today.

These are just four aspects of a broader need to hold those who design and control the digital environment to the same standards to which we hold the institutions and commercial players in other environments. This is not so much a brave new world as one that has yet to catch up and be subject to the principles inherent in parity, accountability, design standards and enforcement. Each of these offers an approach to the digital environment that would ensure that it meets the rights and needs of its users. I hope that the Minister will feel able to fully answer the points that I have raised. I welcome this excellent report.

Photo of Lord Giddens Lord Giddens Labour 4:39, 19 November 2018

My Lords, it is hard speaking this far down the list, because I have made all these notes on my own notes and I am not sure I understand them any more, if I could understand them in the first place. Anyhow, like others, I begin by congratulating the noble Lord, Lord Clement-Jones, on his masterful chairmanship. I also thank our advisers. This was a terrific committee to be on, and I learned a lot from it.

DeepMind has been mentioned plenty of times already, but I am here to add a little more to its lustre. The impact of DeepMind has been truly global, but this is not fully appreciated in this country. The goal of DeepMind is, as it puts it, to “solve intelligence”, to deploy deep learning to mimic some of the basic capacities of the human brain. This is the difference between what AI was and what it is becoming. Deep learning is the prime motor of this transformation which, as other noble Lords have rightly said, will transform everything in our lives and is beginning to do so already.

In 2017, the computer program AlphaGo, developed by DeepMind, beat the world champion and No. 1 player, Ke Jie, at Go, a much more complex game than chess. Go is not so much a game as a philosophy. It is 2,500 years old. It is so complex that ordinary players do not even know when a game is finished, yet DeepMind’s program triumphed in a series of matches over the world champion.

That is stupendous. As one Chinese observer put it, AlphaGo did not just defeat Ke Jie, it “systematically dismantled him”. What is not generally known in the West is the huge impact that this event made in east Asia. In China, the five matches were watched by a total of 280 million viewers—that is about four times the population of this country. They were not only watched but devoured, one might say. As one observer put it, China plunged into an “AI fever”. The impact of DeepMind, a little start-up in King’s Cross originally, has truly been geopolitical. It has been called China’s Sputnik moment, analogous to the events of 60 years ago that dented US pride.

As the noble Baroness, Lady Rock, mentioned, although I seem to have quite different figures, $22 billion will be invested directly in AI by the Chinese Government by 2020. They will try to do for AI what they have done for infrastructure. They have built a vast network of bullet trains in about 25 years, and here we are struggling with HS2. They will probably do the same in AI. Therefore, a global race for pre-eminence in AI is under way, not only between China and the US but with Russia and other major states involved. This will push the technology forward steeply.

As other noble Lords have mentioned, it is crucial to recognise that AI is not just about the future. It is best defined in terms of huge algorithmic power. The smartphone in your pocket or bag—although you have to say, in your hand, because if you go on the Underground, everyone is looking down; if you walk along the road, everyone is looking down—has more power than the computers that allowed the US to overcome its Sputnik moment 60 years ago and go on to land on the moon.

The committee is right to conclude that the progress being made in deep learning is not progress towards general AI—AI that mimics or surpasses human intelligence. I think myself that there are good logical reasons why this will never happen. Rather, it will be the ubiquity of deep learning and its application to a variety of spheres of social and economic life that will reshape our lives.

Examples are here already. I will not mention too many of them, but a notable one is that a very high proportion of trading on world markets is done purely by algorithms, with no direct human intervention. They are dealing with billions of dollars—it is quite extraordinary. Similarly radical interventions can be traced elsewhere.

In this new global geopolitical race, the UK cannot hope to compete with China or the US on overall investment in AI. As our report makes clear, this country can nevertheless have a pioneering role and should look to advance this further. Active state intervention will be needed in a variety of domains. It is to the Government’s credit that they have recognised this and prompted the creation of a range of new agencies—the Alan Turing Institute, the AI council, the Centre for Data Ethics and Innovation and so forth—to which other noble Lords have drawn attention, but how far have these actually progressed?

We cannot remain static in this swirling world of transformation. We have to guess at possible futures and, at the same time, cope with issues raised by the profound transformations that have already occurred. As the noble Lord, Lord Clement-Jones, has said, the large digital corporations must be brought to heel and more effective control over the use of personal and private data returned to citizens. The huge questions that hang over the role of fake news in destabilising democracy must be urgently addressed. What is being done to co-ordinate a response to this? Have the Government in mind any intervention at national level? This is leading to a crisis of democracy in many countries that is all too visible.

Does the Minister agree that we must actively strive to promote, not just AI, but what some call IA? This relates to the point made by my noble friend Lord Browne about intelligence augmentation rather than artificial intelligence. In other words, we do not want to promote forms of activity and technology where human beings are simply designed out. Nowhere is the principle more crucial than in the design of autonomous weapons. Will the Minister update the House on the progress of DARPA—the Defense Advanced Research Projects Agency; a very nice name—in seeking to create a “glass box” form of autonomous weaponry, in other words one where human beings are kept in the loop? We are in real trouble if weapons escape our direct control. Large passenger planes are already mainly flown by computers and the algorithms embedded in them. Hence the airline joke: “What is the ideal cockpit crew? A pilot and a dog. The pilot is there to feed the dog and the dog is there to bite the pilot if he or she tries to touch anything”. This is not what we want the future of humanity to be.

As a coda, the world champion Ke Jie learned from his losses and became a much better player. He “fundamentally reconsidered” his game. DeepMind responded to this by saying that it was “honoured by his words”, and “also inspired by them”. It added that it must take,

“responsibility for the ethical and social impact of our work”.

As other noble Lords have indicated, we must hold it to this premise.

Photo of Lord Rees of Ludlow Lord Rees of Ludlow Crossbench 4:49, 19 November 2018

My Lords, I add my appreciation of this timely and balanced report and welcome the chance to debate it here today. Machine learning, enabled by the ever-increasing number-crunching power of computers, is a potentially stupendous breakthrough. It allows machines to gain expertise, not just in game playing but in recognising faces, translating between languages, managing networks, and so forth, without being programmed in detail.

Moreover, AI is still at the baby stage compared to what its proponents expect in coming decades. Twenty years ago, few people envisioned the extent to which smartphones and IT have now changed the pattern of our lives, so it would be rash to predict how transformative AI could be in the next 20 years. Already, AI can cope with complex, fast-changing networks, such as traffic flows or electric grids. It could enable the Chinese to gather and process all the information needed to run an efficient planned economy that Marx could only have dreamed of. In science, its capability to explore zillions of options could allow it to find recipes for better drugs or for material that conducts electricity with zero resistance at ordinary temperatures.

But the implications for society, as we have heard, are already ambivalent. If there is a bug in the software of an AI system, it is currently not always possible to track it down. This is likely to create public concern if the system’s “decisions” have potentially grave consequences for individuals. If we were sentenced to a term in prison, recommended for surgery or even given a poor credit rating, we would expect the reasons to be accessible to us and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.

Integration of databases by AI systems has an impact on everyday life and will become more intrusive and pervasive. Records of all our movements, our interactions with others, our health, and our financial transactions will be “in the cloud”, managed by a multinational quasi-monopoly. The data may be used for benign reasons—for instance, for medical research—but its availability to internet companies is already shifting the balance of power from Governments to the commercial sector.

There will also be other concerns—about privacy, for instance. Are you happy if a random stranger sitting near you in a restaurant or on a train can, via facial recognition, identify you and invade your privacy, or if fake videos of you become so convincing that visual evidence can no longer be trusted, or if a machine knows enough about you to compose emails that seem to come from you? The report rightly raises concerns about these matters.

A report published in February, prepared with input from my colleagues at Cambridge and Oxford, was entitled The Malicious Use of AI: Forecasting, Prevention and Mitigation. Its focus was on the near term, and it highlighted three concerns: AI could allow existing types of cyberattack to be achieved with less effort, and therefore by more actors; by use of, for instance, co-ordinated drones, AI could facilitate physical attacks, and cyberattacks could occur on the software of driverless cars; and AI could allow more effective targeting of misinformation, denial of information, surveillance and so forth. Overall, the arms race between cybercriminals and those trying to defend against them will become still more expensive and vexatious with the advent of AI.

The academic and commercial communities now speak with one voice in highlighting the need to promote “robust and beneficial” AI, but tensions are already emerging, as AI moves from the research and development phase to being a potentially massive money-spinner for global companies.

The committee’s report emphasises the incipient shifts in the nature of work—an issue addressed in several excellent books by economists and social scientists as well as by the noble Lord, Lord Hollick, and others today. Clearly, machines will take over much of the work of manufacturing and retail distribution. They can replace many white-collar jobs: routine legal work, such as conveyancing; accountancy; computer coding; medical diagnostics and even surgery. Many professionals will find their hard-earned skills in less demand. In contrast, some skilled service sector jobs—for instance, plumbing and gardening—will be among the hardest to automate.

The digital revolution generates enormous wealth for an elite group of innovators and for global companies, but preserving a healthy society will surely require redistribution of that wealth. There is talk of using it to provide a universal income. But it is surely better when all who are capable of doing so can perform socially useful work rather than receiving a handout. Indeed, to create a humane society, Governments should vastly enhance the number and status of those who care for the old, the young and the sick. There are currently far too few of these people, and they are poorly paid, inadequately esteemed, and insecure in their positions. It is true that robots can take over some aspects of routine care, but old people who can afford it want the attention of real human beings as well. Let us hope that we never get to a situation when we accept automata as substitutes for real teaching assistants reading stories to children with proper human empathy of the kind the noble Lord, Lord Reid, emphasised.

Not only the very young and the very old need human support: when so much business, including interaction with government, is done via the internet, we should worry about, for instance, a disabled person living alone who needs to use websites to claim their rightful government benefits or to order basic provisions. Think of the anxiety and frustration when something goes wrong. Such people will have peace of mind only when there are computer-savvy caregivers to help the bewildered cope with IT, to ensure that they can get help and are not disadvantaged. Otherwise, the digitally deprived will become the new underclass. Caring roles provide more dignified and worthwhile employment than the call centres or warehouses where jobs have been lost. Does the Minister think that it is possible to use the earnings of robots, as it were, to achieve Scandinavian-level welfare where the demand for carers is fully met?

Even if we have machines that can, effectively, interact with the real world, this will not be enough to ensure that they have human empathy. Computers learn from a “training set” of similar activities, where success is immediately “rewarded” and reinforced. Game-playing computers play millions of games; computers gain expertise in recognising faces by studying millions of images. But learning about human behaviour involves observing actual people in real homes or workplaces. The machine would feel sensorily deprived by the slowness of real life and would be bewildered. Only when this barrier can be surmounted—and perhaps it never will be—will AIs truly be perceived as intelligent beings, and if that happens, their far faster “thoughts” and reactions could then give them advantages over us.

Many experts think that the AI field, like synthetic biology, already needs guidelines for “responsible innovation”. Moreover, the fact that AlphaGo Zero achieved a goal that its creators thought would have taken several more years to reach has rendered DeepMind’s staff even more bullish about the speed of advancement. But others, like the roboticist Rodney Brooks—creator of the Baxter robot and the Roomba vacuum cleaner—argue that these projections will remain science fiction for a long time. Be that as it may, it is crucial to be aware of the potential of artificial intelligence, even though real stupidity will always be with us.

Photo of Viscount Chandos Viscount Chandos Labour 4:59, 19 November 2018

My Lords, I join other noble Lords in thanking the noble Lord, Lord Clement-Jones, for securing this debate and chairing so well the Select Committee, the report of which, on artificial intelligence, we are debating today.

I believe the report is a strong demonstration of the effectiveness of the ad hoc committees—to be renamed, possibly with the aid of some artificial intelligence, special inquiry committees—in addressing cutting-edge technological issues in the fast-changing society and economy in which we live. I draw the attention of the House to my entry in the register of interests, in particular as a trustee and chair of the investment committee of the Esmée Fairbairn Foundation—an investor in venture capital funds worldwide with significant holdings in AI companies—and as a director and shareholder of an AI-powered music company.

In referring to AI companies, I am reminded of an interview in the mid-1990s given by Andy Grove, the Hungarian-born co-founder and chief executive of Intel Corporation and author of the bestselling business book, Only the Paranoid Survive, a well-thumbed copy of which is doubtless in the library of 10 Downing Street. “Soon,” Mr Grove said,

“people will stop talking about investing in internet companies. They will invest in companies, almost all of which will use the internet”.

Similarly, I suspect, we will not think about AI companies for long but about companies generally, which almost universally will use AI. Indeed, in the evidence given by MMC Ventures to the committee, it was suggested that, already, only 10% of the companies it considered funding were pure AI developers, while the remaining 90% were applying AI.

At this stage of the debate, I shall concentrate on just one of the many questions arising from the committee’s report: how well placed is the UK in developing and applying AI? I have no hesitation in expressing my admiration for the excellence of research, expertise and work in the UK’s universities—Cambridge is singled out in the report, but is by no means the only leader in the field. It is a huge challenge to maintain, let alone strengthen, this position, even without the uncertainties and difficulties posed by Brexit. In the US, for instance, there is a virtuous circle of successful technology entrepreneurs acknowledging their debts to their alma maters with generous donations to their endowments. As long ago as 2001, Gordon Moore, another co-founder of Intel and the author of Moore’s law, gave $600 million to Caltech, the California Institute of Technology, which, I believe, is still the largest gift to an academic institution.

While British entrepreneurs are increasingly generous in supporting our leading universities, there is still a gulf between the resources available to them and their peers in the US. Not only is it essential, as my noble friend Lord Hollick has said, for EU funding in this area to be fully replaced, but significant real increases must be provided if the UK’s position is not to slip. In the Government’s response to the report, paragraph 53 scatters numbers like confetti but does not make it clear whether this challenge will be met. Will the Minister clarify the position?

The US, with its academic excellence and resource, the power of its technology clusters and the scale and expertise of its venture capital industry, presents massive competition, but China may be an even more formidable competitor, as my noble friend Lord Giddens and the noble Baroness, Lady Rock, have already suggested. Dr Kai-Fu Lee—arguably the leading technology entrepreneur in the country, whose PhD at Carnegie Mellon University in the 1980s was on AI—has calculated that 43% of all academic papers worldwide on AI have had at least one Chinese co-author.

Data privacy in China is substantially less well regulated, allowing data-driven, AI-powered businesses to operate highly effectively in areas such as banking and fintech. I do not advocate a regulatory race to the bottom—I leave that to the malfunctioning artificial intelligence of the European Research Group—but I draw these comparisons to emphasise that if we choose, rightly, to ensure that privacy, integrity and trust are prioritised in our approach to AI, we have to ensure all the more that we do not miss a single trick in providing the highest level of human and financial capital to companies developing and applying AI in this country.

The noble Baroness, Lady Rock, said, “We have the capital”, and the venture capitalist Eileen Burbidge, in her evidence to the committee, argued that there was no shortage of financial capital at any stage, whether seed, early or growth. Maybe, my Lords. In the last year for which comprehensive data is available, $6 billion of venture capital funds were raised in the EU, $26 billion in the US and over $30 billion in China. Of course, money is not everything but it sure as hell helps. Even more important than the quantity of money is the quality of money—the expertise and support of the venture capitalists who direct the funding to entrepreneurs. The scale of the VC funds raised in the US and China contributes critically to the depth of resource that the venture capitalists can devote to their investee companies. Once more, we face a formidable challenge in the UK in matching—let alone exceeding—that with the patient capital fund that is being established, painfully slowly, under the British Business Bank, doing little more than replacing the funding the UK has been receiving from the European Investment Fund.

I believe in the capability of the AI community in the UK. To return to the words of Andy Grove:

“Success breeds complacency. Complacency breeds failure. Only the paranoid survive”.

Photo of The Earl of Erroll The Earl of Erroll Crossbench 5:07, 19 November 2018

My Lords, I congratulate the noble Lord, Lord Clement-Jones, and the committee on a great report, which is crammed full of good advice, especially about the need for investment in our universities, where they teach thorough thinking, and in our innovative SMEs, where we can possibly unleash the full potential of the UK in this area. I declare a small interest in that I am about to join an ethical oversight group for the Proton Partners data vault, which will contain oncological data.

The first thing that struck me about the report was what it said about lifelong retraining. I can see exactly why this is necessary. I remember reading a report some time ago about people’s capacity to handle change as they grow older. Unfortunately, a lot of people find that very difficult. Certainly a lot of my friends do, and they regard me as rather odd because I have lived in the cyber world and am very happy to embrace change and enjoy it. However, I have discovered that a lot of people like to settle down within the boundaries of what they know, so I do not know how that will be handled. Will the human mind and its ability to handle change alter? I think we should study that.

The second thing that amused me in the report was the great figures on how many jobs we are going to lose. So far, I have noticed that every time there has been a technological improvement, the number of jobs has increased—jobs never seem to disappear; they just change. I remember that when bookkeeping software came out, it was said that accountants would be redundant. I will not go on with other examples as there is no point.

The third thing that I noticed in the report was the reference to anonymisation, which comes down to a lot of things that people want. They want their privacy and are terrified either of big companies knowing too much about them and using their data for financial gain or of the Government drawing inappropriate conclusions from their patterns of behaviour about whether to restrict their ability to move around. That may be a mistake. The trouble is that, while we may theoretically be able to anonymise data, if certain things are anonymised properly they are no longer useful. Epidemiological research is particularly like that. It is very often necessary to know where a subject is located in order to look for clustering effects in the data. To go right back to the first example, that is how cholera was tracked down to a particular street in London. The utility of the data can be destroyed.

That brings me to ethics, which is really what I wanted to mention. With true anonymisation, if you discover that a subject in a study could be saved if only you could identify them, should you save them? Or, in the greater cause of keeping the data for epidemiological study, should you make sure that everything is anonymous and accept that they will die? That brings me to the ethical bit. I was very interested in the speech by the noble Lord, Lord Reid, who, much better than I could, went down the road of thinking about the challenge of the AI system. It is, as he said, an alien thought process. It does not have empathy or a conscience built into it. It is therefore, by definition, sociopathic. That is a challenge. How do you get that into a computer? It does not think like us. Our little computers—our brains—are analogue computers that work on reactions and in shades of grey. They are not, at heart, logical. However much you give that computer fuzzy logic, it comes down to ones and noughts firing at the bottom. I have heard discussions between various neuroscientists about whether it is possible to programme empathy, but that does not matter. We do not have that at the moment.

It will be interesting when the computer that lacks empathy comes up with some conclusions. Let us fire at it the huge problem of NHS funding. One big problem is the unsustainable cost of end-of-life care. The Government are trying to dream up all sorts of wonderful taxes and so forth. Some research a long time ago by a Dutch university found that smokers spend seven times more in taxes during their lifetimes than they cost when they start dying of cancer. They also die earlier, so there would be less end-of-life care to fund. The AI computer will think logically. It will realise that there has been a huge rise in obesity. In fact, obesity-related cancers have now overtaken smoking-related cancers. I predicted the rise in obesity when people were stopped from smoking because smoking is an appetite suppressant. Therefore, if we can get more people smoking, we will reduce the obesity and end-of-life funding problems and we could probably drop taxes because there will be a net gain in the profits from people who smoke. And they would enjoy themselves, particularly bipolar people—smoking is great for them because it calms them down when they are hyper and, if they are a bit down and getting sleepy in a car, they can puff on a cigarette and not fall asleep, avoiding many accidents on the road. I can see just how the computer would recommend that.

Is that a sociopathic view? Does it lack empathy or is it logically what we should be doing? I leave that to noble Lords. I make absolutely no judgment. I am just trying to suggest what could happen. That is the problem because lots of these decisions will involve ethics—decisions that are going to cause harm. We have to work out what is the least-worst thing. How will we deal with the transfer of liability? I will run out of time if I go into too many things, but there will be biases in the system: the bias of the person who designed the way the computer thinks and analyses problems, or the bias—this is in the report—of the data supplied to it, which could build up the wrong impression.

These machines are probably intelligent enough effectively to start gaming the system themselves. That is dangerous. The more control that we hand over to make our lives easier, the more likely we are to find the machines gaming. The report on malicious intent, which my noble friend Lord Rees referred to, is very interesting and I highly recommend it. It was produced by a collaboration of about half a dozen universities and it can be found on the internet.

Much has been said about people and the big data issue. I was very involved, and still am, with the internet of things and I was the chair of the BSI group which produced PAS 212 on interoperability standards. The whole point is to get the data out there so that one can do useful things with it. This is not about people’s data but about the consequences for them of the misuse of such data. An example would be trying to enhance traffic flows and so on. It may be that the computer, to control the overall picture, could send someone out on a journey that is not in their best interests. They may be in a crisis because their wife is about to have a baby and needs to get to hospital quickly. There are issues around this area which come down to liability.

The root of it all is the real problem that complex systems are not deterministic. While you can get the same pattern twice, you do not get the same outcome every time. That is the problem with having rules-based systems to deal with these things. AI systems can start to get around that, but you cannot be sure of what they are going to do. It has always amused me that everyone is predicting a wonderful artificial intelligence-driven idyllic future where everything is easy. I think that it will probably get bogged down in the legal system quite quickly, or other issues such as safety may arise. By the time the HSE gets its teeth into this, I will be very interested to see what happens.

I think back to the late 1970s when ethernet came on to the scene. There were many predictions about the paperless office that would arrive in a few years’ time. A wonderful cynic said that that was about as likely as the paperless loo. All I can say is that the loo has won.

Photo of Lord Suri Lord Suri Conservative 5:16, 19 November 2018

My Lords, we have become used to seeing artificial intelligence as the enemy. Representations in popular culture have not helped in this. From “The Terminator” to the “Avengers” films, AI is presented as getting out of control and surpassing its creators. Authoritarian leaders seem obsessed with harnessing AI to bolster their military defences. Xi Jinping has pledged that China will draw level with the US in artificial intelligence by 2020 and overtake it by 2025. Vladimir Putin said last year that whichever country achieved dominance in AI would come to dominate global affairs. The combination of these threats, unknowns and challenges has come to fix in the public mind a mistrust of AI. I hope that this salient report and some of its recommendations can create a more positive image.

The uses of AI go far beyond military technology. Like the information revolution, AI is poised to sweep all before it and revolutionise working. This requires deep thinking and a proper strategy to cope with the loss of jobs. Many jobs can be created from AI, and the UK has the potential to become a global leader, but we must grasp the nettle as soon as possible. One of the biggest problems with Brexit is that it has swallowed up this Parliament and looks to swallow up the next one too. Germany has a full strategy in place for dealing with this new revolution while we have only just put ours into place. The Secretary of State for BEIS has made some good speeches but we need a dedicated Minister to really drive this package through, as with the industrial strategy.

Happily, we start from a position of strength. The reputation of our dedicated Technology and Construction Court and the flexibility of the common law have made the UK a regulatory leader, even if the legislative input has been slighter than might have been expected. Furthermore, our world-class universities have continued to churn out talented graduates who attract established firms and start up new ones themselves. AI is also one of the sectors that has spread wealth around the UK. AI firms are thriving in Leeds, Glasgow, Manchester and many other places without feeling the urge to clump in London.

One of the most significant threats I foresee is a fall in the number of people permitted to come to the UK to work in AI. More than almost any other sector, tech firms rely heavily on the ability to draw in talent from overseas. Post-Brexit, our immigration policy must be totally focused on quality; AI represents unparalleled potential gains. The report shows that it could add £630 billion to the economy by 2035. That is not an opportunity to pass up. We must be clear that those coming to work in AI are an enormous asset and we should be happy to welcome them.

The Government can also do more in their own affairs. In total, the Government produce more data than any other UK institution. When she was at Defra, the current Chief Secretary to the Treasury spearheaded a policy to release all its data in an open and machine-readable format. This was a stunning success and must be emulated across government. Obviously some sectors are more sensitive than others but, frankly, departments have a tendency to silo their data and not let private firms access it for free. We must look to ourselves first and do what we can to encourage domestic industry.

Photo of Lord Brooke of Alverthorpe Lord Brooke of Alverthorpe Labour 5:21, 19 November 2018

My Lords, I am grateful to the committee for its excellent report. I am also grateful, in the main, for the Government’s response. I must confess that I could not say that about their response to the committee’s concern about inequality, which was dismissed in virtually two sentences. When we look at what has happened in the past 50 years and how we have changed fundamentally on the issue of equality, as well as the concerns that may come with these changes, we cannot dismiss that issue so lightly.

I particularly want to speak about the impact on the labour market, whence I originate. In recommendation 39, the committee rightly states:

“There is an urgent need to analyse or assess, on an ongoing basis, the evolution of AI in the UK, and develop policy responses”.

Clearly, there will be changes in the labour market on a scale that we have never witnessed. A lot of people are producing worrying and alarmist reports about the likely consequences; we have heard a variety of figures today about the number of jobs that may disappear. I am one of those who believes that new technology creates additional jobs, although in different places. That comes from experience as a trade union official over the years. However, it does not necessarily follow that this will run for ever, given that the pace and depth of the technology may be on a different scale to what we have experienced.

One of the problems we have found with many of the recent changes is that new jobs have been created on a big scale but they have been of extraordinarily low quality and exceptionally low pay. The result is that our workforce is far from happy, compared with the satisfaction that people had in work 40 or 50 years ago. What is created is very important. Like the noble Lord, Lord Rees, I see this growth as an opportunity for many more people working in the public sector. That then raises the question of how we can raise the money to pay for more people in the public sector.

I suggest to the Minister—this was looked at lightly by the committee—that we should consider the possibility that people might not want to work and that, as AI develops, a question mark may arise over the policy, pursued jointly by the political parties, of seeking full employment. Will it be justifiable in the longer term to pursue such a course? Is it not a possibility that many people will not want to work and might look for a different way of relating to the state and other people than we have had in the past? Some countries are already starting to explore the possibility of minimum incomes being provided for all citizens. This was tried recently in Italy and, I think, was quickly dismissed there, but the OECD has done some work and has been reasonably positive in suggesting that it should not be dismissed totally out of hand and is worth pursuing.

My question on employment to the Minister—as I am sure he will continue to keep the numbers under review—is whether some thought might be given in the longer term to an alternative system of rewarding people who do not go to work. In part, through the tax credits that Gordon Brown introduced, we are already rewarding people in work with subsidies from the state, and these have grown and grown. The question for those people, who are doing low-quality jobs for low pay, is whether there might not be something better for them to do in society than they are doing at the moment. This will not happen overnight and neither will many of these changes, but is the Minister’s department doing some longer-term thinking about it? Is it looking at what some other countries are doing and examining what research is being undertaken by organisations such as the OECD? Is this a backstop that we ought to be thinking about in the longer term?

Labour’s view is that we should reduce the length of the working week. This was announced recently. Whether or not people want that I am not entirely sure. Given a choice between having more freedom to do different things and having a routine job for shorter hours they might opt for something quite different. However, the Government’s position on this is relatively unclear, so I would be grateful if the Minister could give us some explanation.

I came to this debate having led a debate in September on trends and changes in addiction. I was drawn to it particularly by the concerns that are increasingly being expressed about children and the internet. I was also involved in the House of Lords inquiry into information technology years ago. We never foresaw for one moment the changes that would come with hand-held mobiles and the changes affecting children. I suspect, with respect, that even much of this report might be overtaken very quickly in other areas that we never foresaw.

Last week we had a very good debate in this House on social media services, in which the noble Baronesses, Lady Kidron and Lady Grender, and my noble friend Lord Stevenson, who led the debate, highlighted some of the particular problems arising that will have a major impact on the way society is developing. I have also read an outline of some of the work done by the noble Lord, Lord Rees, on the future prospects for humanity. He did not say a great deal on that today, but I tend to share some of his views that some changes are more negative than positive.

I come back to addiction and look at what is happening in China, where sex dolls are being produced. As recently as a month ago, a major exhibition was held about them. The police here are having to impound imported child sex dolls; in the past two years they have been seized at the ports. We see a range of dolls being offered to adults. The face of an individual can be replicated; the dolls are made of material that almost replicates flesh. One can get a doll that speaks or responds to whatever one wants. These are major changes.

I look to the Church in particular to see the challenges brought to bear when people spend so much of their lives on their own, perhaps in their bedrooms. They do not want to communicate with other people; they can only communicate online. The skills for connecting with others have gone, yet what will happen if they stay there and what problems will arise? I think that AI will lead to a major growth in the incidence of mental health problems, as we are now detecting in many areas. That is where extra work will have to be done and where the human factor will come to bear, hopefully in helping one another. These are big issues and we are only scratching the surface.

Photo of Lord Freyberg Lord Freyberg Crossbench 5:31, 19 November 2018

My Lords, I, too, congratulate the noble Lord, Lord Clement-Jones, on securing today’s debate and thank him for his work as Chairman of the Select Committee on Artificial Intelligence. The committee’s report makes an important contribution to the wider debate about how the United Kingdom can position itself to be a world leader. Along with the Government’s response, it raises a number of important questions that I want to explore today.

Like the noble Lords, Lord Kakkar and Lord St John of Bletso, and others, I want to focus on our handling of the data that will drive new advances in artificial intelligence and, related to that, its potential to deliver better healthcare outcomes. I am pleased that the committee’s report looks in depth at healthcare and at how the Government might support the development of leading-edge policies to complement advances in AI. Several recommendations caught my eye, including those concerned with ensuring that we maintain public trust in the safe and secure use of personal data, and others which explore how we might harness the value of healthcare data.

The Government’s response gives the House a clear indication of their thinking and makes it plain that they will continue to evolve a regulatory framework that protects patients’ data. They also confirm that work is under way better to understand how to obtain value, in the broadest possible sense of the term, from granting access to patient data for research and innovation.

I presume that an important facet of the latter is the Government’s recently published code of conduct for data-driven health and care technology, which includes mention of the commercial approaches that individual NHS trusts might adopt in discussion with third parties making use of big data to advance machine learning and evolve AI. At present, it is a voluntary code, because I understand that Ministers want to encourage organisations to sign up and feed back on the initial draft.

However, I hope that the Government will think seriously about making the next version mandatory so that the provisions for safeguarding patient data, while extracting optimum value, are placed on a more robust footing in the near future, in particular given last week’s news, mentioned by the noble Lord, Lord Clement-Jones, and others, concerning the absorption of DeepMind’s Streams team into Google.

As I have said previously in the House, realising the potential value of healthcare data represents a time-limited opportunity in a globally competitive market. There is currently no clearly agreed strategy which sets out how the NHS and UK plc intend to benefit from providing access to and usage of the broad-ranging data assets that the NHS controls. I was pleased that the Treasury published Getting Smart about Intellectual Property and Other Intangibles in the Public Sector, which was integral to the Budget.

For those noble Lords who are unfamiliar with it, the document recommends the establishment of a central repository detailing government knowledge asset holdings and their value; guidance on designing and implementing best-practice protocols; the protection and commercialisation of public sector knowledge assets; and the registration of the intellectual property assets with the most commercial potential so that their value to the United Kingdom is maximised. These recommended next steps are to be welcomed, and I know that my noble friend Lord Mitchell is in agreement, since they echo the thrust of the amendments that he introduced during the passage of the Data Protection Bill earlier this year. However, I would like to understand who will be leading this work and how they propose to interface with the Department for Business, Energy and Industrial Strategy as well as the Department of Health and Social Care. I also note that the recommended next steps are not currently attached to a clearly defined timetable against which progress might be measured.

Once again, we are left wondering—to quote the subtitle of the report of the noble Lord, Lord Patel, on the life science industrial strategy—who is driving the bus and whether it will be on time. Might the Minister clarify as much in his response? If government policy and guidance are misaligned and do not act as an appropriate check before the proverbial horse bolts, there is a risk that public trust will be eroded, which could in turn act as a brake on the innovation on which I am sure many of us agree improvements to patient outcomes now hinge.

Allied to this, the committee’s report recommends that the Information Commissioner’s Office work closely with the new Centre for Data Ethics and Innovation on the establishment of “data trusts”. This is also to be welcomed. However, if data trusts are to work, it is appropriate that data subjects have a clearly defined stake and say in such initiatives from the outset. Of course, further discussion is needed before we alight on the right balance between individual involvement in, control of and, potentially, reward from healthcare data sharing versus what could otherwise be leveraged by the state for public benefit were healthcare data, in particular, to be approached as a “sovereign asset”. In the interim, can the Minister confirm how the Government intend to facilitate this?

As was said by the noble Baroness, Lady Kidron, and others, the big technology giants possess a unique concentration of power. The Government will need to demonstrate leadership and take urgent action to protect patient data while ensuring that a prime opportunity to enable leading-edge innovation in health and care is not missed, as happened in the past with care.data. There is currently no regulation or strong enough framework to manage what seems to be a wild west-style data gold rush on the part of the private sector. While the hiatus continues, commercial organisations are taking advantage.

If patients are to benefit from the introduction of artificial intelligence, Ministers must be proactive. A great deal has been said about the need for ethics frameworks today, which I agree must be developed in parallel. However, to quote Professor Luciano Floridi, professor of philosophy and ethics of information and director of the Digital Ethics Lab at Oxford University:

“We’re told we can’t regulate technology because regulation can’t keep up, while at the same time, we shouldn’t regulate because it will destroy its innovative potential—logically, it can’t be both.”

I urge the Government to make it plain who will be accountable to the public on this subject and by when they plan to introduce suitably robust provisions.

Photo of Viscount Hanworth Viscount Hanworth Labour 5:39, 19 November 2018

My Lords, artificial intelligence is a concept that is not amenable to a precise definition, albeit many have been attempted. In a narrow sense, AI denotes the ability of machines to mimic the cognitive skills of human beings, including learning and problem-solving. In a broad sense, it denotes any decision-making that is mediated by the computer. The popular concept of AI has been greatly influenced by the test proposed by Alan Turing in 1950. Turing asserted that if an individual working for an extended period at a keyboard could not reliably determine whether their respondent was a human or a machine when it was in fact a machine, then that machine could be said to exhibit artificial intelligence.

This notion of artificial intelligence places a high requirement on the machine. It also engenders the fear and the anxiety that, with the advent of AI, people will be manipulated, increasingly, by impersonal and malign forces, devoid of human empathy and understanding. The right reverend Prelate the Bishop of Oxford, among others, alluded to such anxieties. A different and more carefree definition of artificial intelligence has been advanced by Larry Tesler, who has observed that AI connotes anything that has yet to be achieved by computers. What has already been achieved, such as speech recognition or optical character recognition, is liable to be regarded merely as computer technology.

Doubts about the definition are reflected in the introduction to the excellent report from the Select Committee on Artificial Intelligence, which illustrates the range of definitions of artificial intelligence with a word cloud. The report also contains a brief history of the progress of AI, in which mention is made of the aspersion cast on James Lighthill that he was responsible for arresting its development in the UK via an adverse report delivered to the Science Research Council in 1973. Lighthill merely asserted that AI was not a coherent academic discipline and that, as such, it did not warrant specific funding. It should also be said that some of the concepts that appear to be at the forefront of modern endeavours, such as artificial neural networks and Bayesian learning, have been around for a very long time.

Notwithstanding these doubts about a definition, the committee has produced a well-focused report. Faced with the rapidly increasing application of computers in diverse spheres of decision-making, it highlights the hazards of their misapplication and advocates a wide range of measures that should be taken to counteract the dangers. To a lesser extent, it identifies steps that can be taken to maximise the benefits arising from the application of computers in decision-making.

Some of the hazards that the report has identified are well known. Among these is the criminal use of computers, commonly involving fraud and impersonation. These are too well known for me to dwell upon them at length: indeed, Members of Parliament are regularly alerted to such hazards. The threats to our democratic process from fake news and from personalised campaign messages conveyed by digital media have also achieved prominence recently. The novelty in these threats is in the power and the prevalence that they have achieved in consequence of the hugely increased processing powers of computers. The hazards that I wish to highlight are of a different kind. They stem to a large extent from the lack of numeracy on the part of many of our decision-makers, who may not have had any scientific education.

The first of these hazards is a tendency to spurious quantification, which might be described as an attempt to measure the unmeasurable. To many, it must seem that a hallmark of modern management is decision-making based on aggregate statistics and on the models of human and social interaction that can be derived from them. The educational sector at all levels has suffered from the ills of spurious quantification, which is most crudely represented by educational league tables. It is proposed that the multifarious activities of an educational establishment can be summarised in a single index purporting to represent the quality of its provision, and that this index can be used to determine its ranking in a long list of similar establishments. Aggregate measures of quality or performance are compounded by applying differential weights to incommensurable quantities and by adding them together. Chalk is mixed with cheese in arbitrary proportions to produce an indigestible amalgam.

For civil servants and administrators, the advantage of such summary measures lies in their ability to simplify the decision-making process, which often concerns financial and other kinds of support that must be given to the institutions. The statistics allow individual institutions to be removed from view and allow remote and uninformed decisions to be taken without any attendant qualms. I sometimes wonder whether the decision-makers would satisfy what I describe as the inverse Turing test—can they be clearly distinguished from robots? The onus of gathering the information that gives rise to the spurious quantification, or of producing accompanying self-justifications, falls upon the institutions in question. The demands can become so great as to impede their proper functioning.

For a long time, the primary and secondary tiers of our educational system have been subject to decisions arising out of their rankings. More recently, our universities have been subject to the same methodology. I have a clear view of the consequences, which I consider to be disastrous. The emphasis on statistical analyses has, of course, been fostered by the availability of computers. The lack of quantitative skills on the part of those who handle the information and their inability properly to interrogate it is a major hazard. The problem has been highlighted by the noble Earl, Lord Erroll.

Had I time to describe them fully I would dwell at length on some of the fiascos that have arisen from the Government’s attempt to adopt computerised information processing. One of the most prominent examples concerns the initial attempt by the NHS, some years ago, to create an integrated system of patient record-keeping. A large and unrecoverable sum of money was given to an American software company, which created nothing of any use. The episode illustrated one of the hazards of outsourcing. It was proposed that it would be far more efficient for the organisation to use the services of experts in matters of computing than to rely upon its own expertise. However, if there are no resident experts within an organisation, then it is usually incapable of assessing its own needs, or of envisaging a means of satisfying them. In that case, it is liable to be vulnerable to confusion and exploitation. The noble Lord, Lord Kakkar, talked eloquently on that issue.

To those with whom I am serving on a Lords Finance Bill Sub-Committee, it seems clear that HM Revenue and Customs is in the process of creating a similar fiasco in its Making Tax Digital programme. It seems to me that, far from being new and unprecedented, the principal hazards of artificial intelligence are both familiar and mundane. They will be overcome only when we face up to the need to devote far more resources to enhancing the mathematical, the quantitative and the computer skills of our nation. The issue is a perennial one: are we to be the masters of our technology or its slaves?

Photo of Lord Janvrin Lord Janvrin Crossbench 5:47, 19 November 2018

My Lords, I add my thanks to the noble Lord, Lord Clement-Jones, for instigating this debate. I congratulate him and his colleagues, their advisers and staff on their excellent report. It is clear, comprehensive and very thought-provoking. The Government have rightly taken it seriously as an important contribution to the realisation of their industrial strategy, one that sets artificial intelligence and the data revolution as one of the four grand challenges to be addressed in shaping the future of this country.

The report gives us plenty to chew on among 74 recommendations under 26 sub-headings in eight substantive chapters. At this stage of the debate brevity is at a premium, but I do want to flag three areas: skills, governance and a subject mentioned right at the start of the debate by the noble Lord, Lord Holmes—public engagement. Under the heading of skills I want to address two separate issues. The first is the need to ensure that we have the highly skilled AI developers this country needs to allow us to be at the forefront of this revolution. The second, a point made by many speakers in the debate, is the need to address and reskill those whose jobs are put at risk by the new technologies such as AI.

On the first point, the report rightly devotes a number of recommendations to this crucial issue, particularly around increased funding for postgraduate studies, what I would call the diversity and inclusion imperative, and the expansion of the visa regime to attract the best talent from overseas to work in this country. I also strongly support the report’s recommendation for short postgraduate conversion courses, perhaps developed by the Alan Turing Institute, to allow students from other disciplines to have a grounding in the application of AI. AI is not an end in itself but a means to an end in other fields, as we have heard, such as medicine, law or the creative industries. The fourth industrial revolution is about the blurring of lines between disciplines. I would welcome the Minister’s comments on plans to address what I would call the interdisciplinary challenge; for example, through conversion courses.

My second skills point is the need to reskill those whose jobs are lost through technological disruption. This was a major recommendation in the digital skills report of 2015, and it was good to see the Government picking it up in their plans for a national retraining scheme announced in the Autumn Budget last year. It is essential to ensure that the private sector plays an active part in funding these programmes, with collaboration at the local and regional as well as national level. I ask the Minister to confirm that industry is fully involved in the plans for retraining and lifelong learning that have been mentioned so often in the debate.

The second general area raised by the report is the question of effective AI governance. This is well covered in chapter 9—“Shaping artificial intelligence”—and covers government engagement, ethics and regulation. I note that the government response is in the name of two government departments—BEIS and DCMS—as well as the Office for Artificial Intelligence. We also have the AI Council to give strategic oversight, the Alan Turing Institute leading on research, and the new Centre for Data Ethics and Innovation advising on how data and AI are used and regulated. We are told that AI policy-making will be part of the existing industrial strategy governance and decision-making processes. I agree with the committee that it needs to be clear who is driving policy in this area, both at Cabinet level and below, and how the roles and remits of these various bodies are defined. Clarity is crucial to allow government, industry and the academic world to collaborate effectively. It is vital when it comes to funding, accountability and evaluating success.

I also share the committee’s view, endorsed frequently in the debate, about the importance of an ethical framework for AI policy-making. As we have heard, there are general ethical implications around liability, responsibility, fairness and transparency to be thought through. The whole area of ethics, regulation and defining standards is one in which the UK has often been at the forefront, and I hope that that will continue to be the case here. The new Centre for Data Ethics and Innovation will have an important part to play in this.

My third and final general point is about ensuring that there is wide public understanding of the implications of AI, as we have heard frequently in the debate. The report draws attention to this area in a number of its recommendations. The challenge is to build public trust in a technology where—to repeat the word used in the report—“explainability” is at a premium. There is a job to be done, led by the Government, to ensure public engagement with regard to the risks and rewards of AI and data analytics. Much of the work may well be around reassuring the public on how data is used, as we have heard. Here I draw attention to the work of Professor Wendy Hall and her proposals concerning the importance of data trusts in the future. I ask the Minister: who in government will co-ordinate the public engagement programme that has been referred to so frequently today?

In conclusion, I thank again the committee and its staff for this report—and, indeed, the Government for finding time to debate it. One of the many disturbing features of our present politics is its ability to suck the life out of debates on other long-term challenges facing this country. It is refreshing to be talking about one of those challenges today.

Photo of Baroness Grender Baroness Grender Liberal Democrat 5:55, 19 November 2018

My Lords, I particularly thank my noble friend Lord Clement-Jones for so ably chairing the Select Committee and leading this debate. It is a tribute to his indefatigable energy and intellect that the report was so well received. Indeed, when we had a training session on neural networks, he left us all in his wake and proved why he was such an able chair. It is also a tribute to the excellent staff already named. As the noble Baroness, Lady Rock, and the noble Lord, Lord Giddens, mentioned, it was an absolute pleasure to work alongside each other.

Thanks to all the people described, it is fair to say that the report has become a touchstone for a comprehensive view of all things AI in the UK. All of us who served on the committee would agree that the depth and breadth of engagement we have been involved in following publication has been extremely encouraging, if not breathtaking. Attending the CogX conference alone was inspirational for me, and the appointment of Tabitha Goldstaub, co-founder of CognitionX, to the AI Council is a very welcome move. It is not often that our Select Committee reports trend on Twitter, but ours did on the day that we published the report, and the media coverage was very positive. Of course, there is always the exception that proves every rule. Despite the fact that we were clear throughout the report that this was not about robots, we could always rely on the Daily Mail and its headline: “Killer robots could become a reality unless a moral code is created for AI, peers warn”.

Of course, the role of politicians is a critical question in a report such as this. Artificial intelligence is one of those moral, economic and social issues where politicians have to engage and set the all-important frameworks so that a business sector can thrive but at the same time society is enhanced and protected. As the noble Lord, Lord Holmes, pointed out, we looked particularly at the model of the Human Fertilisation and Embryology Authority, set up in the wake of the Warnock committee’s work on the moral and ethical debate as technology progressed. As an IVF mum, I could not be more grateful for that combination—set out by this Parliament—of ethical framework and technological progress.

Now, as then, this whole new world of artificial intelligence is crying out for the right kind of strategic leadership, as was highlighted by the noble Baronesses, Lady Rock and Lady Kidron, and the noble Lord, Lord Janvrin. The committee highlighted that a key role for us was to clarify which organisations needed to take up that role. I apologise for the plug, but the table at the back of the report is where we set that out, and we were very proud of the committee’s work on it. That is why the overview of the Office for Artificial Intelligence is so critical, why a national policy framework is so essential, and why the role of the AI Council is key. I particularly look forward to hearing more detail from the Minister about the progress that the AI Council has made to date. However, all these organisations that I have described—and many more—need an ethical framework, ideally one that is global. As my noble friend Lord Clement-Jones explained, the time for action on this is now.

The right reverend Prelate the Bishop of Oxford, who I noticed delivered the five codes from his high-tech tablet, described them very well. He steered us extremely well on the committee, over a mountain of AI evidence, towards those five overarching principles, which require that AI be targeted at the common good. The noble Baroness, Lady Rock, referenced the national Centre for Data Ethics and Innovation and the potential for that organisation to build trust. Certainly on these Benches we see the governance of that organisation, and data trusts in particular, as critical to building public confidence. In fact, in a previous article, the noble Lord, Lord Giddens, described these codes as a new Magna Carta, and I agree. One of the most important questions for this Government to answer in the light of this report remains: will they consider developing an ethical code for the development and use of AI? Do they intend to develop that code at an international level? When President Putin says:

“Whoever becomes the leader in this sphere will become the ruler of the world”, he is not kidding. We very much need to be part of a global movement that sets the parameters for AI.

The noble Baroness, Lady Kidron, focused on data ownership, particularly among children, and on the availability of open data. My noble friend Lord Clement-Jones, the noble Lord, Lord Holmes, and the noble Earl, Lord Erroll, talked about the anonymisation of that data. In the committee, we looked at the portability of data a great deal and found that it was critical, as was the ICO having adequate resources, as the noble Lord, Lord Hollick, mentioned.

However, it is not sufficient to regulate for the here and now; we must equip people for the future. Education was discussed a great deal. The evidence that the general public put this issue in the “too far in the future” or “too difficult to explain” columns—particularly noticeable among parents of school-aged children—should be alarming to government. For adults, a significant government investment in skills and training is all-important or many will miss out on the AI opportunity altogether. I felt it was disappointing that the Government did not engage with the recommendations in this part of the report. What if the general public do not understand the opportunities of AI? For instance, people running SMEs might have little understanding about the take-up, and therefore the market that they can provide, for start-ups in AI if they are not engaged or do not understand that AI can be relevant to them. Only last week, I spoke to a group of parents in a school and when I explained about AI, their eyes glazed over. But when I asked, “Do any of you use an app on your phone to tell you when the bus is about to arrive?”, all of them said yes. It is about how we make this relevant to people so that they understand that they need to get engaged—to be educated and be part of this revolution, not left behind by it.

With regard to investment in business, I noticed a different figure for how much China is investing so I feel that I must contribute it to the debate. The Lords Library briefing says that China is investing $425 billion in AI by 2020—a different figure from that used by the noble Baroness, Lady Rock, and the noble Lord, Lord Giddens. I am sure that someone, perhaps the Minister, will correct us on that.

The future of work will look so different and we must equip our young children in schools to be ready for that future. The ethical must be a part of that curriculum as well and, while the curriculum has made a significant step forward, teacher confidence and parental engagement must improve. We must also continue to recognise the value of critical thinking skills from an early age, which means continued emphasis on humanities as well as learning coding from key stage 1. By the way, five and six year-olds in key stage 1 are now learning how to remove a bug from a piece of code. Teacher confidence in how to teach that is not quite there yet and we need to improve it. Will the Minister explain why our recommendations about ensuring that teachers have time to learn these skills were rejected in the Government’s response?

On education, the noble Baroness, Lady Rock, described the potential for commercial and academic partnerships and the noble Lord, Lord Brooke, spoke of the disruption to employment and the danger of greater inequality. We agree. The noble Lord, Lord Kakkar, gave an excellent description of how this technology can work in the health service, but ringing in my mind is the evidence we took from Professor Susskind, who explained to us that even surgeons will eventually be made redundant by AI. In response to the noble Lord, Lord Reid: when I asked about empathy, it was Professor Susskind who said in clear terms that algorithms can sometimes spot that older people are in distress far faster than human beings can. I leave that with your Lordships; I am not saying that I necessarily agree.

If AI is to work in the future, above all it has to represent everyone. For me, one of the most striking phrases given to us came from Olivier Thereaux, who said:

“We take bias, which in certain forms is what we call ‘culture’, put it in a black box and crystallise it for ever. That is where we have a problem. We have even more of a problem when we think that that black box has the truth and we follow it blindly”.

I was delighted to hear from the noble Lord, Lord Reid, who nailed the danger of bias. We have already had a description of a glass box and the potential that provides. The noble Lord, Lord St John of Bletso, talked about the potential for blockchain technology to overcome some of these problems.

Simply put, if an algorithm followed the gender balance we currently have in the House of Lords, this place would continue in its failure to reflect the rest of the population it serves. We had many recommendations about overcoming bias in our report but, in particular, I would like the Minister to address the very simple and low-cost proposal to use the Industrial Strategy Challenge Fund to stimulate the creation of tools to test datasets and ensure that they are representative. I ask him to take another look at that proposal because, if AI is the future, we cannot start from here with the current data that is fed into it. As my noble friend Lord Clement-Jones and the noble Baroness, Lady Rock, said, we absolutely must encourage greater diversity.

There is tremendous opportunity and, of course, threat in artificial intelligence. But in the UK, whatever the outcome of Brexit, there is a real opportunity for us to shape that future by leading in ethical and economic development so that everyone benefits.

Photo of Lord Stevenson of Balmacara Lord Stevenson of Balmacara Opposition Whip (Lords) 6:07, 19 November 2018

My Lords, I thank the committee very much for its report. From the debate today, it sounds as if it was a good event in itself. When it was meeting, I would quite often see what I now know were members of the committee in the corner discussing arcane issues about artificial intelligence. That is something you do not often find in your Lordships’ House but it was refreshing and welcome, although the technology sometimes left me a bit lost. This is a good report, with nearly 80 detailed recommendations. Such productivity is not often matched around the wider economy but obviously we can do here what others are not able to do. It demanded a high-level response from the Government but, as others have said, their response does not quite match up to the quality of the report.

I am sure I am not the only person present today who was extremely pleased when the noble Lord, Lord Clement-Jones, did a rather brilliant précis of the report in his opening speech. Obviously, as a lawyer he is used to this kind of thing—gathering together ridiculous facts, bringing them together in a convincing narrative and winning us over with the skill of his language and the brilliance of his metaphors—but I am sure the reality is that others must have experienced the report as I did. I rather struggled with it and certainly struggled with the evidence, some of which was way out of my league. But when the noble Lord finished and sat down, I felt that I had been there and owned it.

I thank all speakers who have contributed. It is interesting to note that 13 of the 20 speakers before me were not involved directly in the committee. That is unusual and worth remarking on. Normally these committee reports, good and worthy though they are, tend to be restricted to those who have been through the pain of the events and want to get it out of their system by speaking in the debate. To get so many external speakers wanting to contribute to this debate is a very good test of this report reaching out. It has generated a very good debate, one of the best of its type. Committees are the gold standard of our work, and their reports are very important. They travel out and do the job of explaining to people what we do. It is in the best traditions of the House to make sure that we issue reports and discuss them. It is good that the Government were able to respond quickly enough for the report at least to be within recent memory. Like the noble Lord, Lord Janvrin, I think it is nice to be talking about something real and not related to B-R-E-X-I-T—or not particularly.

There were many good speeches so I shall not select any to make particular points and I am not going to follow the noble Baroness, Lady Grender, in going through them. I want to mention two contributions which for me marked out this debate: my noble friends Lord Reid and Lord Browne pulled off the rather difficult trick of opening up a much wider perspective about some of the issues that were raised in the report. One of them spoke on the ethical and philosophical issue, which was very interesting and reached out to everyone here, and one of them spoke on a very pragmatic and potentially dangerous issue. Both of them were talking outside the box.

I shall very quickly cover some individual points that the Government should respond to and have not done so well in response to the report. Our whole approach to AI and our ability to make it one of our winning combinations in this country will not happen unless there is proper physical infrastructure. The report states:

“We welcome the Government’s intentions to upgrade the nation’s digital infrastructure as far as they go”.

I think that is the point. The report goes on:

“However, we are concerned that it does not have enough impetus behind it to ensure that the digital foundations of the country are in place in time to take advantage of the potential artificial intelligence offers”.

This takes us back to issues that were discussed in other places and also raises a question about where responsibility for this lies in government. The Government’s response, although perfectly adequate, is just a list of announcements that they have previously made about money. It does not pick up the issues that underlie what I think the report is about: we do not think hard about what is ahead of us, what facilities are going to be required for mobile, fibre to the home rather than to the cabinet, and the 5G revolution that is with us. We are not going to be ready to take advantage of any of the stuff that should be coming down the track. What are the Government doing about this? Is it not time to get away from the ridiculously unachieving universal service obligation and replace it with something that takes us to the 1-Gig economy? That would give us a standard allowing those who wish to participate—whether they are SMEs or big companies, in the city or in rural areas—the coverage, contention ratios and competition that will drive the rollout of this technology and really make a difference. I look forward to hearing the Government’s response.

As for who drives this policy, the issue is the confusing array of bodies being set up. There is an AI Council, an AI department, the Centre for Data Ethics and Innovation, the GovTech Catalyst team and the new Alan Turing Institute. I could not make out from the government response where they all sit. I think the committee was urging the Government to be proactive in policy, to harness the potential and to mitigate the risk, but it also points out that they will not do that unless there is clear leadership at the top. Many other noble Lords mentioned this point. If a national policy framework for AI is to be in lockstep with the industrial strategy, it is surely not sufficient simply to say that we have an industrial strategy and that will do it. We are saying that AI is the key to lots of things within the industrial strategy and it needs to be given its own responsibilities and arrangements to take it forward. It is important that the Government own this as a separate part of that activity. We need to think further about which departments are involved, which Cabinet committees will be responsible for it and how the various elements between DCMS, BEIS, health and other departments are going to be handled. Where does this co-ordination take place and how will it be taken forward?

On the question of an AI code, the recommendations again are to be supported. It needs to be something that will give guidance and regulatory security to the companies that want to become involved. The debate today has highlighted the needs here. The Government’s response just states:

“There are a number of high level themes emerging around the ethical and innovative uses of data … some of which are highlighted within the Committee’s report”, and that some of them are not very new. I do not think that is getting behind what the recommendation was trying to do, which is to say that there will be a competitive advantage for the companies involved and for the country if we have a clear statement of what is expected of them and how this will be taken forward.

Who will review the policy and how will it be done in a way which will be a feedback loop? The committee’s report states:

“For the UK to be able to realise the benefits of AI, the Government’s policies, underpinned by a co-ordinated approach, must be closely monitored and react to feedback from academia and industry where appropriate”.

I do not see where that is going to happen in the structures that are in the Government’s response. Will the Minister respond on that point in particular?

Enough people have talked about the problems with DeepMind and Google to ensure that the Government will respond on that, and I look forward to it. They are clearly examples that should send shivers through all the work that is currently going on, all the discussions we had during the passage of the Data Protection Bill and all the thinking that has been done since then about how data is to be organised and made secure, how personal data is to be protected and how the value in that data is to be unlocked in an effective and efficient way. This links into a very interesting section in the report about data trusts, to which again the Government’s response did not match up. Will the Minister explain the thinking a bit more? The issues are well discussed and the balance between the practical issues and the ethics is rehearsed, but the idea that this will be a solution to all the problems that companies and individuals will have in their data being used is naive. It is very important that AI systems are trusted and used, but they will not be unless we can make sure that those who have responsibility for the data and those who own the data are able to get the satisfaction they need out of that. This goes back to discussions we had during the passage of the Data Protection Bill on whether there would be copyright in individuals’ data. I will be interested to know whether the Government have anything to say on that, and on whether it is possible for an individual who holds personal data to be the data controller for it. Both those solutions have a lot of advantages in relation to data trusts and how they might be used, and I will be interested to know whether there is any further information.

That links into data monopolies and who owns the data once it has been given into a system, whether or not those who have given it know that they are doing so. If that is the case, do we have the regulatory authority to make sure that the monopolies that will emerge can be controlled effectively? Others have spoken about that.

On autonomous weapons, I do not think there is anything more to say from the Dispatch Box in relation to my noble friend Lord Browne’s comments. The Government might wish to come back to this because it seriously worries a lot of people and should be dealt with.

On the related issues about the impact on the labour market and the need for much more work, I do not think the Government’s response is up to it. On the impact on social and political cohesion, there are too many issues to be raised specifically, but again, to rely on a digital charter is not going to get the answer to the questions that people have been raising here today.

Finally, there is the question of inequality. This is always a concern for those who hold public office, and it was especially brought out by the report in terms of the risk that greater societal and regional inequalities will emerge as a consequence of the operation of AI. That was not dealt with by the Government’s response.

This is a very good report and it was matched by a very good debate. There are issues that need to infuse virtually all aspects of what we do in the industrial strategy, but they go much wider than that and deal with personal and ethical issues which also have to be looked at. The Government said in their response that they broadly accept the principles in the report. The sad thing is that there are very few examples of actions that have been taken to deliver them.

Photo of Lord Henley Lord Henley Parliamentary Under-Secretary (Department for Business, Energy and Industrial Strategy) 6:19, 19 November 2018

My Lords, I start by offering my sympathy to the noble Lord, Lord Stevenson, for the state of his voice. I think mine might be in the same state. It would be nice if all these scientists in AI, life sciences or wherever could do something for the common cold—I think that is a plea that many of us would put forward.

I thank the noble Lord, Lord Clement-Jones, and congratulate him on his report. I am trying to think of the right word to use about being invited to appear in front of his committee with my right honourable friend Matt Hancock, who has since been promoted twice, whereas I have not. We were in a state of awe but thrilled to be asked to give evidence, and I hope that we helped.

The report has been a very useful part of the general discussion that we have had in this area. The noble Lord, Lord Rees, said it would be rash to predict what is going to happen over the next 20 years. However, as the report makes clear in its title, AI in the UK: Ready, Willing and Able?, it is important that we get ourselves in a position to be ready, not for exactly what is going to happen but for a whole range of possibilities as to how things will develop over the next 20, 30 or however many years. That is why, back in early 2017, as part of the beginning of the industrial strategy and the UK Digital Strategy, the Government commissioned their independent review. I am sure that that is why this House took the decision to establish the committee that the noble Lord so skilfully chaired—I offer my congratulations again to him and all those who served on it—to look at the economic, ethical and social implications of advances in artificial intelligence.

The independent review that we then commissioned under Professor Wendy Hall and Jérôme Pesenti published its findings in October of last year. Our industrial strategy came out in November, almost a year ago. In April of this year the noble Lord published his committee’s report, and within the appropriate number of weeks, in June 2018, we published our response to it. As the noble Lord put it, he gave us a mixed scorecard but said it was a good start. I hope that since then we have done quite a lot more and are now making progress. We have announced the chair of the AI Council, which is now being established.

In his speech the noble Lord set out five threads as a way of structuring his remarks, the fifth being the unifying thread dealing with ethical development, together with five suggested principles. I found it quite difficult to decide on the best way of responding to a debate of this sort and of bringing together the vast range of different suggestions. Obviously I will not be able to answer every point that has been put to me in the course of the debate.

I thought that I might take not those five threads but instead the four core recommendations set out in the Hall/Pesenti review. The recommendations addressed improving access to data and dealing with the question of trust; skills, another issue that many noble Lords have dealt with; how we can maximise help for UK AI research; and support, by government and others, for the uptake of AI, which comes on to the questions of governance, ethics and so on. So I hope that with those four major groupings I will be able to deal with a number of questions that were put by noble Lords in the course of the debate.

I start with my noble friend Lord Holmes of Richmond, who talked about the need to make people feel part of the AI revolution and how we could, as he put it, avoid it becoming the next GM—something that should develop but to which some people have taken a rather negative approach. We agree that it is crucial that we engage with the public on the new technology. I believe that chilling effects, such as those seen with GM, could limit the economic and social benefits. Public engagement should be a core function of the recently established Centre for Data Ethics and Innovation, about which I will say a little more later, and an understanding of public perception and public acceptability will be core to the centre’s function of maximising the benefits for all.

I turn to the question of the importance of skills. This issue was first raised by the noble Lord, Lord Hollick, and, in moving on to the health service, the noble Lord, Lord Kakkar, continued the point. It is important to improve access to skills. The changing nature of jobs is going to have an effect on other jobs as those jobs disappear. As some will be aware, a Deloitte analysis published in September last year found that we are in fact already adapting quite well to the effects of automation: from 2001 to 2015 there was higher growth in jobs at low risk of automation than in those at high risk. Each new low-risk job pays considerably more on average than the high-risk job that it replaced, and that has added considerable funds—Deloitte estimated the figure at some £140 billion—to the UK economy.

AI is a new factor of production that could be used for labour substitution where labour is scarce, or to complement labour to produce higher-quality output. Obviously there will be a large number of professions and jobs that will need to evolve, while others could remain at high risk of displacement if they retain a high component of routine. That applies, as some noble Lords put it, to a number of professional jobs. Whether people are in the law or insurance, they will all need to change and adapt.

The Government are already offering a whole spectrum of skills packages, from the development of lifelong digital skills training plans through the Digital Skills Partnership to a revamped computer science curriculum in schools. Both my own department, BEIS, and the DCMS plan to work across the industry sector to support businesses to use AI more effectively and, in addition, to make the case for more flexible careers being more likely and beneficial to personal development.

I believe that in this area we can also compete internationally where it is necessary to bring in elements from abroad to accelerate innovation and advance the progress of AI. There are two very recent examples of that. First, at Davos my right honourable friend the Prime Minister announced a new partnership with the World Economic Forum on developing a framework for responsible procurement of AI in the public sector.

Secondly, at the Commonwealth Heads of Government Meeting, my right honourable friend the Prime Minister announced with Prime Minister Modi the new UK-India tech partnership to identify and pair businesses, venture capital, universities and others to provide access routes to markets for British and Indian entrepreneurs and small and medium-sized enterprises.

Turning to migration issues, I think it was the noble Lord, Lord Hollick, who asked about the number of tech visas. I give an assurance that we have doubled the number to about 2,000. That will certainly bring more into this country, but it will be kept under review by the Home Office.

On funding for research, the Government responded in some detail in paragraph 53 to recommendations 31 and 32. We made it clear that the artificial intelligence sector deal was just the first commitment from the Government to realise the technology’s potential, outlining a package of almost £0.95 billion for the sector. Further research funding of £1.6 billion for R&D, not all in AI, was announced in the recent Budget, helping us to meet our commitment to get R&D expenditure up to the level set out in the industrial strategy a little over a year ago.

We are confident that our strategy is building on a very strong baseline. We were recently ranked first in the Oxford Insights government AI readiness index, which measures innovation, availability of data, skills and the regulatory landscape. We are already home to some of the biggest names in the business, such as DeepMind, which has been mentioned. We are certainly looking for more investment, but we are seeing a great deal of it: we announced investment in the sector deal, and further investment was announced in the recent Budget.

I turn to the question of ethics in AI, particularly in health, raised by the noble Lords, Lord Kakkar and Lord Reid, my noble friend Lady Rock and the right reverend Prelate the Bishop of Oxford. The crucial question is how we will address liability, including in health. We recognise the need to move forward on AI in an ethical and responsible manner. That is why we are establishing the Centre for Data Ethics and Innovation to advise on the governance of data and AI and to work with civil society, industry, the regulators and the public sector to strengthen their governance.

As noble Lords will be aware, we have closed our consultation on the scope for the centre and will shortly be publishing our response. We expect the centre to publish its operating strategy some time next spring. This will set out the themes and priorities for the centre. A core part of its remit will be to consider and scan the current regulatory landscape and advise the Government on gaps and improvements in data and AI.

In a rapidly changing industry and world, one must be aware of the danger of getting these things wrong. One is reminded of the introduction of the motor car, when Governments felt that they ought to regulate, thinking it best to have a man with a red flag walking in front of it. Governments rapidly realised that this did not work and was rather impeding the development of that industry, and removed the man with the red flag. I hope that we can get the regulation, the ethics and everything else right. As the centre begins its work programme, we expect it to consider such issues and take them forward.

The noble Lord, Lord Freyberg, asked whether the recently published code of conduct should be made mandatory and how it should be taken forward. We launched it in September, and it is building on the Government’s data ethics framework. It is currently voluntary, with an ambition for companies to co-design the code. In parallel, the Government are keeping the regulatory landscape under review and will further consider the future of the code and how to enforce it as it progresses.

In the time available to me I am not sure that I can deal with many more questions. I want to answer the call of the noble Lord, Lord Browne of Ladyton, for the Government to launch an inquiry, in line with the report, into autonomous weapons. We continue to engage across government and internationally. At this stage, I would not want to go much further than that. I note what he says—I think he said that there would be an opportunity to discuss it later this week. I am sure that my right honourable friends in the Foreign Office will take note of that.

As I said in my opening remarks, it is very difficult to do justice to a report such as this in the short time I have. I think that the Government got five or six out of 10—or perhaps a little more, because the noble Lord, Lord Clement-Jones, is fairly generous—for our response to the report. We very much welcome the report, and I hope that he will welcome that response.

As I have set out, a great deal is happening at government and other levels. It is difficult to know quite how to respond on these occasions, but we have all reflected on how far we have come since the report was published in April. I believe wholeheartedly that we are on the cusp of an AI and data revolution that will change all our lives. Like my noble friend Lady Rock, I am one of the eternal optimists. I think it will change all our lives and communities for the better, and that this country is likely to be home to a thriving and vibrant AI sector, realising the vision that we have set out in the sector deal and in our response to the committee’s report—both encouraging investment and attracting the brightest minds.

Our ambition will not stop with that sector deal: it is only the beginning of the United Kingdom’s plans to be recognised as a place where ingenuity and entrepreneurship can continue to flourish, where technology follows the highest ethical standards and where the transformative potential of that technology is spread across the UK economy as widely as possible. With that, I thank the noble Lord for his report.

Photo of Lord Clement-Jones Lord Clement-Jones Liberal Democrat Lords Spokesperson (Digital) 6:39, 19 November 2018

My Lords, every Select Committee hopes for a debate as good as this one. The noble Lord, Lord Stevenson, pointed out the exceptional number of non-committee members who have taken part. That is a sign of the quality of today’s debate and the points made. Noble Lords showed expertise in so many different sectors: healthcare, defence, film, industry, financial services and the future. Not all noble Lords have recently published books on the future, but the contribution from the noble Lord, Lord Rees, was much appreciated.

Nearly all speakers emphasised the need for momentum in developing not only AI but the ethical frameworks that we need. Quite frankly, we are still in the foothills. The issue will become of greater importance as AI is combined with other technologies such as the internet of things and blockchain. We need to be absolutely clear that our policy must be active. We must also have the means of scrutiny. I hope that the House will come back to this, perhaps in one of the other Select Committees rather than an ad hoc one. As things move on so quickly in this area, we need to keep abreast of developments. The mantra that I repeat to myself, pretty much daily, is that AI should be our servant, not our master. I am convinced that design, whether for ethics, accountability or intelligibility, is absolutely crucial. That is the way forward and I hope that, by having that design, we can maintain public trust. We are in a race against time and we have to make sure we are taking the right steps to retain that trust.

I thank all noble Lords for this debate. This is only the first chapter; there is a long road to come.

Motion agreed.