Publication of AI Regulation White Paper Consultation Response

Department for Science, Innovation and Technology written statement – made on 6 February 2024.


Viscount Camrose, Parliamentary Under Secretary of State (Department for Science, Innovation and Technology)

I am repeating the following Written Ministerial Statement made today in the other place by my Right Honourable Friend the Secretary of State for Science, Innovation and Technology, the Rt Hon Michelle Donelan MP.

Today, the Government is publishing our response to the consultation on the Artificial Intelligence (AI) regulation white paper: A pro-innovation approach to AI regulation.

The world is on the cusp of an extraordinary new era driven by advances in AI, which presents a once-in-a-generation opportunity for the British people to revolutionise our economy, transform public services for the better, and deliver real, tangible, long-term results for our country. The UK AI market is predicted to grow to over $1 trillion (USD) by 2035 – unlocking everything from new skills and jobs to once unimaginable life-saving treatments for cruel diseases like cancer and dementia. That is why I have made it my ambition for the UK to become the international standard bearer for the safe development and deployment of AI.

We have been working hard to make that ambition a reality, and our plan is working. Last year, we hosted the world’s first AI Safety Summit, bringing industry, academia, and civil society together with 28 leading AI nations and the EU to agree the Bletchley Declaration, thereby establishing a shared understanding of the opportunities and risks posed by frontier AI.

We were also the first government in the world to formally publish our assessment of the capabilities and risks presented by advanced AI; and to bring together a powerful consortium of experts into our AI Safety Institute, committed to advancing AI safety in the public interest.

With the publication of our AI regulation white paper in March, we set out our initial steps to develop a pro-innovation AI regulatory framework. Instead of designing a complex new regulatory system from scratch, the white paper proposed five key principles for existing UK regulators to follow, together with a central function to ensure the regime is coherent and streamlined and to identify regulatory gaps or confusing overlaps. Our approach must be agile so it can respond to the unprecedented speed of development, while also remaining robust enough in each sector to address the key concerns around potential societal harms, misuse, and autonomy risks.

This common sense, pragmatic approach has been welcomed and endorsed both by the companies at the frontier of AI development and by leading AI safety experts. Google DeepMind, Microsoft, OpenAI and Anthropic all supported the UK’s approach, as did Britain’s budding AI start-up scene and many leading voices in academia and civil society, such as the Centre for Long-Term Resilience and the Centre for the Governance of AI.

Next steps on establishing the rules for governing AI

Since we published the white paper, we have moved quickly to implement the regulatory framework. We are pleased that a number of regulators have already taken steps in line with our framework, such as the Information Commissioner's Office, the Office for Nuclear Regulation, and the Competition and Markets Authority.

We have taken steps to establish the central function to drive coherence in our regulatory approach across government, starting by recruiting a new multidisciplinary team to conduct cross-sector assessment and monitoring to guard against existing and emerging risks in AI.

Further to this, we are strengthening the team working on AI within the Department for Science, Innovation and Technology across the newly established AI Policy Directorate and the AI Safety Institute. In recognition of the fact that AI has become central to the wider work of DSIT and government, we will no longer maintain the branding of a separate “Office for AI”. Similarly, the Centre for Data Ethics and Innovation (CDEI) is changing its name to the Responsible Technology Adoption Unit to more accurately reflect its mission. The name highlights the unit’s role in developing tools and techniques that enable responsible adoption of AI in the private and public sectors, in support of DSIT’s central mission.

In September we also announced the AI and Digital Hub, a pilot scheme for a brand-new advisory service run by expert regulators in the Digital Regulation Cooperation Forum. It will be laser-focused on helping companies get to grips with AI regulations so they can spend less time form-filling and more time getting their cutting-edge products from the lab onto the market and into British people’s lives.

Building on the feedback from the consultation, we are now focused on ensuring that regulators are prepared to face the new challenges and opportunities that AI can bring to their domains. This consultation response presents a plan to do just that. It sets out how we are building the right institutions and expertise to ensure that our regulation of AI keeps pace with the most pressing risks and can unlock the transformative benefits these technologies can offer.

To drive forward our plans to make Britain the safest and most innovative place in the world to develop and deploy AI, the consultation response announces over £100 million to support AI innovation and regulation. This includes a £10 million package to boost regulators’ AI capabilities, helping them develop practical tools to build the foundations of their AI expertise and their ability to address risks in their domains.

We are also announcing a new commitment by UKRI that future investments in AI research will be leveraged to support regulator skills and expertise. Further to this, we are announcing a nearly £90 million boost for AI research, including £80 million through the launch of nine new research hubs across the UK and a £9 million partnership with the US on responsible AI as part of our International Science Partnership Fund. The hubs, based in locations across the country, will enable AI to evolve and tackle complex problems across applications, from healthcare treatments to power-efficient electronics.

In addition, we are announcing £2 million of Arts & Humanities Research Council (AHRC) funding to support research that will help to define responsible AI across sectors such as education, policing and creative industries.

In the coming months, we will formalise our regulator coordination activities by establishing a steering committee with government representatives and key regulators. We will also be conducting targeted consultations on our cross-sectoral risk register and monitoring and evaluation framework from spring to make sure our approach is evidence-based and effective.

We are also taking steps to improve the transparency of this work, which is key to building public trust. To this end, we are calling on regulators to publicly set out their approaches to AI in their domains by April 2024 to increase industry confidence and ensure the UK public can see how we are addressing the potential risks and benefits of AI across the economy.

Adapting to the challenges posed by highly capable general-purpose AI systems

The challenges posed by AI technologies will ultimately require legislative action across jurisdictions, once understanding of risk has matured. However, legislating too soon could stifle innovation, place undue burdens on businesses, and prevent us from fully realising the enormous benefits AI technologies can bring. Furthermore, our principles-based approach has the benefit of being agile and adaptable, allowing us to keep pace with this fast-moving technology.

That is why we established the AI Safety Institute (AISI) to conduct safety evaluations on advanced AI systems, drive foundational safety research, and lead a global coalition of AI safety initiatives. These insights will ensure the UK responds effectively and proportionately to potential frontier risks.

Beyond this, the AISI has built a partnership network of over 20 leading organisations, allowing it to act as a hub galvanising safety work in companies and academia. Professor Yoshua Bengio, as Chair, is leading the International Scientific Report on Advanced AI Safety, which brings together 30 countries, including the EU and UN. The AISI is also continuing its regular engagement with the leading AI companies that signed up to the Bletchley Declaration.

In the consultation response, we build on our pro-innovation framework and pro-safety actions by setting out our early thinking on future targeted, binding requirements on the developers of highly capable general-purpose AI systems. The consultation response also sets out the key questions and considerations we will be exploring with experts and international partners as we continue to develop our approach to the regulation of the most advanced AI systems.

Driving the global conversation on AI governance

Building on the historic agreements reached at the AI Safety Summit, today we also set out our broader plans regarding how the UK will continue to drive the global debate on the governance of AI.

Beyond our work through the AI Safety Institute, this includes taking a leading role in multilateral AI initiatives such as the G7, OECD, and the UN, and deepening bilateral relationships building on the success of agreements with the US, Japan, Republic of Korea, and Singapore.

This response paper is another step forward for the UK’s ambitions to lead in the safe development and deployment of AI. The full text of the white paper consultation response can be found on