I too thank the noble Lord, Lord Gilbert, for his heroic chairing of this lengthy and complicated inquiry. I also thank the clerk, Theo Pembroke, and the specialist adviser, Professor Murray, for gathering a distinguished array of witnesses and shaping this report, of which I and other members of the committee are justifiably proud.
I want to concentrate my comments tonight on chapter 3, on ethical technology. The committee put some energy into understanding the role of algorithms in the digital world and the problems that might arise from unregulated artificial intelligence. I have been particularly struck by the evidence given by witnesses such as Professor John Naughton, who told us that the wider community, including government and industry, were dazzled by technology. He warned:
“We always have to be prepared to apply to it the standard levels of human scepticism that we apply to everything”.
In the report, we raised awareness of the many concerns surrounding AI decision-making. The committee responded with recommendation 6, calling on the Information Commissioner’s Office to set out rules for the use of algorithms in accordance with the principles laid out in chapter 2. We also recommended that the ICO publish a code of practice on the use of algorithms.
The GDPR is supposed to ensure that any data processing is transparent and fair, and avoids bias and discrimination. The Data Protection Act, passed last year in May, enacts these requirements in English law. Yet, despite the DPA, recent surveys show that people are still concerned about the use of algorithms. They are worried about what kind of data is selected to influence algorithmic decisions, about the accuracy of the algorithms being used, and about whether those algorithms are fair and free from bias and discrimination.
The ICO’s interim report, Project ExplAIn, published last week, attempts to lay the basis for ethical guidelines in AI decision-making. It explains that many digital organisations distrust transparency in AI decisions. They fear that it may lead to breaches of commercial sensitivity, the infringement of third-party data and their programmes being gamed by users. However, these concerns need to be set against individuals’ requirement that organisations give appropriately detailed explanations of AI decision-making. The report suggests that there is space to help bridge this divide and to help organisations foster a culture of informed and responsible approaches to innovation in AI technologies.
This work sounds like a good basis for the ICO to publish draft guidelines on ethical design in July, with final publication in October. These will go a long way towards improving the accountability of AI decision-making. I encourage the Government to ensure that these guidelines are in line with the principles set out in the report. Even so, they will be only guidelines. However well thought out they might be, I fear the digital world will always harbour organisations and individuals who do not want to abide by them.
The GDPR is limited. Article 22 of the GDPR and Section 14 of the Data Protection Act provide safeguards for individuals subject to solely automated decisions. These allow data subjects to appeal against an AI decision only when it is fully automated, with no human involved. However, once human involvement in the decision is established, the data subject cannot appeal. As many AI decisions are augmented by human intervention, this seems to be a loophole. Do the Government plan to plug this loophole and bring forward relevant legislation to deal with any potential problems arising from it?
Ethical design is also relevant to my other great concern, raised in the report in paragraph 82, under the heading “Capturing attention”. It points out that digital companies are driven by the commercial imperative to seek and retain users’ attention. The EU Competition Commissioner, Margrethe Vestager, warns that this can lead to a form of addiction. On Monday, Barnardo’s issued a report expressing concern that children’s early access to electronic devices could lead to both addiction and a loss of key social skills as families spend less time talking to each other. This could cause the children problems with mental health and emotional well-being. The committee’s report anticipates these concerns and recommends that digital service providers, including entertainment and games platforms, record time spent using their service and give users reminders of extended use through pop-up notices.
In the debate on the online harms White Paper on 30 April, I said that I was concerned that this problem was not being taken seriously by the Government. The White Paper says that the CMO’s review, which covered online gaming and internet addiction, did not find evidence of a causal relationship between screen-based activities and mental health problems. The White Paper shockingly concludes that the evidence did not support the need for parental guidelines or for requirements that companies behave responsibly in this area.
This lack of action is made particularly serious by the failure to confront the growing problem of gaming addiction, which affects so many young people, especially young men. Policymakers and psychologists across the developed world see this as an issue that needs to be addressed now. However, the White Paper almost ignores it.
In his reply to my April speech, the Minister said:
“I completely agree with what was said about the resistance of the gaming sector, in particular, to engage with this issue”.—[Official Report, 30/4/19; col. 933.]
He gave me his support, for which I was very grateful. It is now six weeks later. Can the Minister give me some assurance that the Government are working to ensure that the gaming industry’s resistance to dealing with gaming addiction will be seriously addressed? Failure to confront this issue quickly and comprehensively will lay the foundations of social and mental health problems for generations to come.