Public Authorities (Fraud, Error and Recovery) Bill – in a Public Bill Committee at 10:06 am on 18 March 2025.
“(1) The Secretary of State must, within six months of the passing of this Act, publish an Anti-Fraud and Error Technology Strategy.
(2) An Anti-Fraud and Error Technology Strategy published under this section must set out—
(a) how the Government intends to use automated technologies or artificial intelligence to tackle fraud against public authorities and the making of erroneous payments by public authorities;
(b) a series of safeguards to provide for human oversight of decision making that meet the aims set out in subsection (3);
(c) how rights of appeal will be protected; and
(d) a framework for privacy and data sharing.
(3) The aims of the safeguards in subsection (2)(b) are—
(a) to ensure that grounds for decision making can only be reasonable if they are the result of a process in which there has been meaningful involvement by a human of adequate expertise to scrutinise any insights or recommendations made by automated systems,
(b) to make clear that grounds cannot be reasonable if they are the result of an entirely automated process, and
(c) to ensure that any information notice issued is accompanied by a statement—
(i) setting out the reasonable grounds for suspicion that have been relied on, and
(ii) confirming that the conclusion has been formed on the basis of human involvement.”
I beg to move, That the clause be read a Second time.
The new clause would require the Secretary of State to publish an anti-fraud and error technology strategy within six months of the Act’s passage. That must include: how the Government intend to use automated technologies and AI to tackle fraud, subsection (2)(a); safeguards to ensure human oversight of decision making, subsection (2)(b); protection of rights of appeal, subsection (2)(c); and a framework for privacy and data sharing, subsection (2)(d).
Members might be asking themselves why we tabled the new clause. In part, it is based on the evidence we received. In written evidence, the Public Law Project expressed concern that, although the impact assessment, the human rights memorandum and the Second Reading statements from the Secretary of State and the Minister for Transformation, the hon. Member for Stretford and Urmston, all say that a final decision on benefit eligibility will always involve a human agent, that commitment is not reflected in the Bill itself. In response to those concerns, the new clause would provide an audit of the technology systems used to tackle fraud, ensuring accountability while addressing the risks posed by automation in decision making.
A report published by the Treasury in 2023, “Tackling fraud and corruption against government”, said:
“Public bodies can better protect themselves…by sharing data and intelligence with other public bodies and working together.”
We therefore believe the technology strategy clause recognises that sharing data is beneficial to stopping and recovering fraud, but includes additional provisions that audit its use.
The strategy must include: how the Government intend to use automated technologies or artificial intelligence to tackle fraud and error against public bodies; what safeguards exist for human oversight of decision making; how rights of appeal will be protected; and a framework for privacy and data sharing.
The safeguards must ensure that grounds for decision making are reasonable only if they are the result of a process in which there has been meaningful involvement by a human of adequate expertise to scrutinise any insights or recommendations made by automated systems. They must also make it clear that grounds cannot be reasonable if they are the result of an entirely automated process. To ensure this, any information notice issued must be accompanied by a statement setting out the reasonable grounds for suspicion that have been relied on, and confirming that the conclusion has been formed on the basis of human involvement.
We know that AI and other technologies have huge potential to improve efficiency and productivity, and they should be used where appropriate, but we cannot yet rely on them to the exclusion of people and human judgment. The strategy we propose would ensure that those points were adequately considered by the Department, ensuring that the taxpayer receives value for money while safeguarding claimants through the decision-making process.
I thank the hon. Member for tabling the new clause. The Government recognise the opportunities that AI and machine learning can provide, while also understanding the need to ensure they are used safely and effectively. In January 2025, the Government outlined their response to the AI opportunities action plan led by Matt Clifford, which was commissioned by my right hon. Friend the Secretary of State for Science, Innovation and Technology. The plan outlined 50 recommendations for how the Government can leverage AI, including recommendations to improve access to data, to make better use of digital infrastructure and to ensure the safe use of AI.
Under the leadership of the Prime Minister and the Secretary of State for Science, Innovation and Technology, we have endorsed this plan, and the Government are taking forward those recommendations. As the Government work to implement the action plan’s recommendations, I do not believe that the separate anti-fraud and error technology strategy proposed by the new clause is necessary. I believe the new clause would cut across the work being taken forward under the action plan, so I reject the amendment.
As technology advances, the use of AI and machine learning will play a crucial role in detecting and preventing fraudulent activities. The Government want to make use of technology and data to tackle fraud, as the Department has a responsibility to ensure that fraud is minimised so that the right payments are made to the right people. The Government remain committed to building our AI capability, and at DWP we will take advantage of the opportunities offered by AI while ensuring it is used appropriately and safely.
Sorry, I should have said this earlier. The new clause would make the Government’s AI strategy a statutory requirement, instead of a manifesto commitment not written into law. That is important to us because, in the case of fraud and particularly benefit fraud, we are dealing with individual people. We want to make sure that we do not inadvertently penalise the wrong people or apply something that is disproportionate. A lot has been said about ensuring proportionality and reasonableness.
I am interested in the Minister’s reflections on where else in the strategy something is applied as personally to potentially vulnerable groups of people, thereby suggesting that we do not need this protection to ensure that people are not inadvertently penalised when we use this legislation to tackle the fraud they are committing.
That is a reasonable question, and clearly the AI framework is not specific to vulnerable groups in the way that the hon. Lady sets out. Decisions regarding benefit entitlement or payments within the Department are made by DWP colleagues, who always look at the available information before making a decision. I would not want to make an amendment that restricted that to only the activity within this Bill; I would want it to apply Department-wide.
As I have set out a number of times at every stage and in every area of this Bill, a human is involved in decision making. There is no plan to change that. I can understand the hon. Lady’s anxiousness to see that set out in legislation, but I think it would create an anomaly between the practices within this Bill and in the Department more broadly. For instance, it is outside the scope of this Bill for a human to complete the vulnerability framework when looking at somebody in financial need who has an overpayment. I would not want to make a distinction between these powers and the rest of the Department’s activities. If we were to have a broader debate, I would be happy to engage with the hon. Lady on that basis, but I would not want to create a “two-tier”, for want of a better word, description within the Department.
At every stage of model development, as we bring forward the AI opportunities action plan and our work in the AI and tech space, we ensure that checks, balances and strong safeguards are in place. I am proud of our commitment to use AI and machine learning in a safe and effective way.
To provide further assurances to Parliament and the public about our processes, we intend to develop fairness analysis assessments, which will be published alongside our annual report and accounts. These will set out the rationale for why we judge our models to be reasonable and proportionate. This reporting commitment on our fairness analysis assessment further negates the need for the new clause.
Finally, the hon. Lady mentioned the new clause’s role in ensuring reasonable grounds of suspicion when investigating fraud. I remind the Committee that, under the information gathering powers, the DWP may request information only where an authorised officer considers that there are reasonable grounds to suspect a DWP offence and that it is necessary and proportionate to obtain that information. Again, human involvement is fully built into the process.
The changes made by the Bill will be reflected in the new code of practice. Updated mandatory training will be provided for staff, who will be accredited to use these new powers. Of course, with the eligibility verification measure in particular, but running throughout the Bill, the principle of independent oversight is very much in place. I hope that will provide the hon. Lady with the necessary information to show that the Government will use the information gathering powers only where there is a reasonable suspicion of fraud, and that this will have considerable human involvement. I agree that there is perhaps a broader conversation to be had about this at an appropriate time.
I beg to ask leave to withdraw the motion.