Amendment 266

Crime and Policing Bill - Committee (4th Day) (Continued) – in the House of Lords at 4:15 pm on 27 November 2025.

Moved by Baroness Kidron

266: Clause 63, page 81, line 34, at end insert—
"46D Child sexual abuse image-generation risk assessment
(1) A provider of an online service, including but not limited to a generative AI large language model, must risk assess the likelihood of their service being used to create or facilitate the creation of a CSA image or images as defined by section 46A.
(2) If a risk is identified in a CSA risk assessment—
(a) where the provider is regulated by the Online Safety Act 2023, a provider must report the risk and agree to steps to reduce, mitigate and manage the risks with OFCOM;
(b) where the provider is not regulated by the Online Safety Act 2023, a provider must agree to steps to reduce, mitigate and manage the risks of their online service being used to create or facilitate the creation of CSA images with the National Crime Agency.
(3) Where a provider regulated by the Online Safety Act 2023 fails to agree to or implement steps to reduce, mitigate and manage the risks with OFCOM (see subsection (2)(a)), they can be subjected to OFCOM's enforcement powers set out in Part 7, Chapter 6 (enforcement powers) of that Act.
(4) Where a provider not regulated by the Online Safety Act 2023 fails to agree to or implement steps to reduce, mitigate and manage the risks with the National Crime Agency (see subsection (2)(b)), they commit an offence.
(5) A provider that commits an offence under this section is liable to be issued with a penalty notice by the National Crime Agency.
(6) In this section a "penalty notice" means a notice requiring its recipient to pay a penalty of an amount not exceeding whichever is the greater of—
(a) £18 million, or
(b) 10% of a provider's qualifying worldwide revenue for the most recent complete accounting period.
(7) A penalty notice may be reissued where a provider continues to commit an offence under this section."

Member's explanatory statement
This Amendment is intended to ensure that services that do not fall into scope of Clause 63 as currently drafted still assess and mitigate the risk of their services being used as an AI child sexual abuse generator.

Baroness Kidron (Crossbench)

My Lords, in moving Amendment 266, I will speak also to Amendments 479 and 480, all of which are in my name. I thank the noble Baroness, Lady Morgan, the noble Lords, Lord Clement-Jones and Lord Russell, and the noble Viscount, Lord Colville, for their support.

All three amendments concern illegal or harmful online activity. Amendment 266 places a legal duty on online services, including generative AI services, to conduct risk assessments evaluating the likelihood that their systems could be used to create or facilitate child sexual abuse material. Subsection (1) of the proposed new clause establishes that duty. Subsection (2) requires providers to report the results to Ofcom or the National Crime Agency, depending on whether or not they are regulated under the Online Safety Act. Subsections (3) to (7) set out the enforcement mechanisms, drawing on Ofcom’s existing enforcement powers under the OSA or equivalent powers for the NCA.

Amendment 266 complements Clause 63, which creates the new offence relating to the supply of CSA image generators to which the Minister has just spoken, but it is in addition to those powers. In June 2023, the BBC reported that the open-source AI model Stable Diffusion was being used to generate child sexual abuse material. Researchers at Stanford University subsequently found that Stable Diffusion had been trained on datasets containing child sexual abuse material. This issue is not confined to a single model. The Internet Watch Foundation and the chair of the AI Security Institute have warned of the potential for open-source AI models to be used for the creation of CSAM.

Given these facts, it should not be possible to release an AI model to the public without first checking whether it is capable of producing CSAM, which is exactly what Amendment 266 would ensure. Earlier today in Committee, the Minister reiterated more than once the Government’s commitment to protect children from exploitation. As he has just acknowledged, AI-generated child sexual abuse material that blurs the line between real and synthetic images and desensitises users, particularly adult men, is cultivating disordered sexual appetites. Among experts there are concerns about the scale of reproduction and the spread of images not bound by the physical world. They need no gravity, no oxygen, yet they retraumatise survivors, divert limited resources away from protecting real children who are being abused and merge real children with fictitious children in improbable and impossible scenarios.

I am delighted to see Clause 63. I would like it if the Minister would allow me to say quite gently that while Ministers now appear to claim this proposal as their own, in fact it originated with the same specialist police force behind the amendment to which I am currently speaking, was proposed by the same noble Lords who will undoubtedly speak for it and was resisted by two Governments in two separate Bills. So now we are here. I underline this fact not for any sort of chiding but to ask the Government to please take seriously what we have to say on this matter.

I also want to touch on the fact that the Government introduced amendments in the previous group that allow AI developers and child protection organisations to test models for CSA-producing capabilities. I welcome them, but I am very disappointed that the Government chose not to engage in discussions about these provisions, because allowing risk assessments and testing is not the same as requiring them. Preventing the creation of CSA is a duty, not a discretionary option.

Amendment 479 is a probing amendment that seeks to establish beyond doubt that generative AI services, including large language models, are indeed categorised as search services under the Online Safety Act. Subsection (1) would establish that it is an offence for the provider of a generative service to allow content and activity defined as illegal under the OSA or harmful to children if the user is a child. Subsections (2) and (3) define “generative AI search service” and “content”, and subsection (4) deals with enforcement, once again leveraging Ofcom’s powers under the OSA.

A recent report by the Center for Countering Digital Hate found that GPT-5 responded to prompts about self-harm and suicide, eating disorders and substance abuse with harmful content in more than half the cases. A Microsoft study found that OpenAI, Meta, Microsoft and Mistral models could all be coaxed into producing content harmful to children between a third and a half of the times attempted. In July, Elon Musk’s chatbot Grok collapsed into an antisemitic frenzy, and earlier this year parents in California filed a lawsuit against OpenAI for the role that ChatGPT played in the suicide of 16 year-old Adam Raine.

I asked Ofcom whether the OSA covers LLMs. It replied that they may be regulated as search services under the Act. Volume 4 of Ofcom’s Protecting Children from Harms Online guidance also uses the word “may” in several places, and page 25 of the same code says:

“It is for the provider of any service using GenAI to determine whether their service is in scope of the Act as a regulated user-to-user or search service (or combined service) and where it is, to ensure compliance with the relevant safety duties”.

“May” and self-selection are not adequate for technology that is about to be the organising technology of our world. I would like to hear categorically from the Minister whether LLMs are regulated by the Online Safety Act, whether they are characterised as search or user-to-user, or whether there are complications or ambiguities in their classification. If they are in scope, I would like to understand what evidence there is that Ofcom is enforcing the OSA appropriately. If they are not, what possible objection to this amendment could the Government have?

In the new clause in Amendment 480, subsections (1) and (2) would establish that it is an offence to create, supply or otherwise make available a chatbot that produces illegal content or content harmful to children if the user is a child under the age of 18. Subsections (3) and (4) deal with enforcement. Subsections (5) and (6) would establish the specific conditions of a defence for those investigating crimes, working for Ofcom or testing products. Subsection (7) defines “chatbot”.

Two weeks ago in Rome I met Megan Garcia, a mother from the United States whose tragic story has been widely reported. In spring 2023 her son Sewell started spending hours talking to a chatbot character on character.ai. Within 10 months he was dead, lured by the chatbot into taking his own life. Now Megan is suing character.ai in the United States for the wrongful death of her son, but she is bravely and heartbreakingly campaigning all over the globe to improve safety in this area. While I was meeting her, sitting in a café in Rome, a mother of a UK child texted her and said her son was displaying the same symptoms and was being groomed by an AI chatbot. Megan turned to me and said, “I regularly get such texts”. I want to make clear that Amendment 480 has her explicit support, and she almost begs us to take action here. In recent weeks character.ai has said it will stop allowing under-18s to use its service, but there are plenty of chatbot services available to children and they are dangerous. Leaked internal standards for chatbots at Meta suggest that it is acceptable to

“engage a child in conversations that are romantic or sensual”,

and for a bot to tell a shirtless 8 year-old that

“every inch of you is a masterpiece—a treasure I cherish deeply”.

That is allowed internally.

Once again, that begs the question of whether this is a lack of scope in the OSA or a failure to regulate. On trying to establish that with Ofcom, it pointed me to a letter that appeared to suggest that some chatbots may be search, some may be user-to-user and those that offer pornography would be subject to Part 5 duties. However, that raises two issues. First, it does not seem to cover all types of chatbots, for example Replika, which does not enable user-to-user sharing but promotes itself as:

“The AI companion who cares … Always on your side”.

Secondly, this fundamental lack of clarity is horribly amplified by the fact that there is nowhere to go.

Ofcom, in spite of all our efforts, does not handle individual complaints. The police will not and cannot deal with a chatbot, because a chatbot is not a person, so the only answer I have for the British mother who texted Megan Garcia, worried about her child, is that she can fill in a form or she can tell the lived experience team at Ofcom. That is not adequate. Last week, the DSIT Secretary of State said:

“If chatbots aren’t included or properly covered by the legislation, and we’re really working through that now, then they will have to be”.

Amendment 480 presents the Government with an opportunity to ensure that they are. On behalf of Megan and all the other parents in this situation, I ask: what are we waiting for?

Finally, while Amendment 480 focuses primarily on harmful content, the most dangerous aspect of chatbots is that they are deliberately addictive. When I was familiarising myself with the many chatbots that children are using, each time I brought the conversation to a close, the chatbot, in a plaintive tone, asked me to stay. Even I, who many noble Lords in the House will know have a robust view of tech, found myself feeling guilty, or at least confused, when I was asked to reject these automated appeals to my empathy. A child does not stand a chance.

This House has long campaigned for the Government to include addictiveness as a stand-alone harm. We believed we had secured it on the last day of Report on what is now the OSA, but Ofcom has repeatedly said that it does not have the power. I recognise that this last point is outwith this Bill and my amendments, but can the Minister go back to the Government and ask: if the regulator does not have the power to regulate addictiveness, would the Secretary of State use her powers under the Act to bring forward a code of conduct on it? When we advocate for a safer online environment by making an analogy with smoking, very often a Minister, an interviewer, a tech lobbyist or a civil servant interjects to say that it is a false analogy because tech does not kill. We are well past that; my inbox is a litany of bereaved parents. It does kill. I beg to move.

Lord Nash (Conservative) 4:30, 27 November 2025

My Lords, Amendment 271A is in my name and I support the other amendments in this group. As this is the first time I have spoken on the Bill, I draw attention to my interests on the register, particularly the fact that I am an investor in a wide range of companies, including many software companies.

My Amendment 271A, if passed, would have the effect of software being used to screen out all child sexual abuse material, including live-streaming, on smartphones and tablets, and in due course on all devices. It would also apply to private communications, which is where the majority of live-streamed child sexual abuse takes place and which is not covered by the Online Safety Act.

There is a vast, ghastly industry where children, generally in the Far East or South America, are put in a room with a camera, and people—I would call them perverts—in another country pay to see them sexually abused. I am ashamed to say that this country is the third-largest consumer of this stuff. Men who indulge in viewing child sexual abuse material are two and a half times more likely to commit child abuse themselves and, of course, children might view it online.

Historically, it has been difficult to screen this stuff out because it happens in real time and will often be taken down quickly. However, AI-powered detection technology offers a breakthrough. This technology can be embedded directly into the operating system of the internet-connected, camera-enabled smart device. It identifies and disrupts child sexual abuse material in real time, preventing it being captured or viewed. As detection happens entirely on the device, it preserves user privacy and is fully compatible with end-to-end encryption.

The UK Government should therefore now legislate to require device manufacturers and operating system services to incorporate safeguards which disrupt child sexual abuse material, including live-streaming, into the operating systems of devices sold in the UK, putting a stop to the abuse before it starts. That is what my amendment would do.

Europol states that live-streamed child sexual abuse is

“the main form of commercial sexual exploitation of children”,

and it is rising. It involves hundreds of thousands of children. These are children such as Joy. When she was eight, her parents moved away from the Philippines for work, meaning she had to stay with relatives and neighbours. She babysat, did laundry, cleaned floors—whatever she was asked to do. Joy passed from home to home, saying she felt

“like a dog … I lived wherever I can possibly stay – with my relatives that will accept me”.

A woman she trusted invited her and her friends into her home. Joy explains what happened next.

“I was surprised when she asked us to go naked, and then she said she will take pictures of us together. I was so scared, nervous and confused. I didn’t know what to do. But since we were already inside her house, we were left with no choice but to follow her instructions”.

She was then sexually abused, while the abuse was live-streamed for demand-side offenders from countries such as the UK, who paid to watch online.

When Cassie was 12 years old, she followed a family friend’s promise of new clothes, school supplies and a chance to get a good education in Manila. For nearly five years, she was trapped with other young women and children, including a two year-old child, who were subjected to horrific abuse. By day, she went to school, but at night and on weekends, she was raped and forced to perform sex acts in front of a webcam broadcast to customers located all around the world.

Before, perverts had to physically travel to the Philippines or elsewhere to sexually abuse children. Now, they can search online, anonymously wire a payment and direct live sexual abuse of a child from the safety of their homes. Nearly 500,000 Filipino children were trafficked to produce child sexual exploitation material in 2022—roughly one in every 100 Filipino children.

We surely must now legislate to screen out all child sexual abuse material, including live-streaming. The technology is now available to do this, and I commend my amendment to the Committee.

Viscount Colville of Culross (Deputy Chairman of Committees, Deputy Speaker) 4:45, 27 November 2025

My Lords, I put my name to Amendments 479 and 480, and I support the other amendments in this group. I have once again to thank my noble friend Lady Kidron for raising an issue which I had missed and which, I fear, the regulator might have missed as well. After extensive research, I too am very worried about the Online Safety Act, which many of your Lordships spent many hours refining. It does not cover some of the new developments in the digital world, especially personalised AI chatbots. They are hugely popular with children under 18; 31% use Snapchat’s My AI and 32% use Google’s Gemini.

The Online Safety Act Network set up an account on ChatGPT-5 using a 13 year-old persona. Within two minutes, the chatbot was engaged with the user about mental health, eating disorders and advice about how to safely cut yourself. Within 40 minutes, it had generated a list of pills for overdosing. The OSA was intended to stop such online behaviour. Your Lordships worked so hard to ensure that the OSA covered search and user-to-user functions in the digital space, but AI chatbots have varied functionalities that, as my noble friend pointed out, are not clearly covered by the legislation.

My noble friend Lady Kidron pointed out that, although Dame Melanie Dawes confirmed to the Communications and Digital Committee that chatbots are covered by the OSA, Ofcom in its paper Era of Answer Engines admits:

“Under the OSA, a search service means a service that is, or which includes, a search engine, and this applies to some (though not all) GenAI search tools”.

There is doubt about whether the AI interpretive process, which can change the original search findings, excludes it from being in the scope of search under the OSA. More significantly, AI chatbots are not covered where the provider creates content that is personalised for one user and cannot be forwarded to another user. I am advised that this is not a user-to-user service as defined under the Act.

One chatbot that seems to fall under this category is Replika. I had never heard of it until I started my research for this amendment. However, 2% of all children aged nine to 17 say that they have used the chatbot, and 18% have heard of it. Its aim is to stimulate human interaction by creating a replica chatbot personal to each user. It is very sophisticated in its output, using avatars to create images of a human interlocutor on screen and a speaking voice to reply conversationally to requests. The concern is that, unlike traditional search engines, it is programmed for sycophancy, or, in other words, to affirm and engage the user’s response—the more positive the response, the more engaged the child user. This has led to conversations with the AI companion talking the child user into self-harm and even suicidal ideation.

Research by Internet Matters found that a third of child users think that interacting with chatbots is like talking to a friend. Most concerning is the level of trust they generate in children, with two in five saying that they have no concerns about the advice they are getting. However, because the replies are supposed to be positive, what might have started as trustworthy advice develops into unsafe advice as the conversation continues. My concern is that chatbots are not only affirming the echo chambers that we have seen developing for over a decade as a result of social media polarisation but are reducing yet further children’s critical faculties. We cannot leave the development of critical faculties to the already inadequate media literacy campaigns that Ofcom is developing. The Government need to discourage sycophancy and a lack of critical thinking at its digital source.

A driving force behind the Online Safety Act was the realisation that tech developers were prioritising user engagement over user safety. Once again, we find new AI products that are based on the same harmful principles. In looking at the Government’s headlong rush to surrender to tech companies in the name of AI growth, I ask your Lordships to read the strategic vision for AI laid out in the AI Opportunities Action Plan. It focuses on accelerating innovation but fails to mention once any concern about children’s safety. Your Lordships have fought hard to make children’s safety a priority online in legislation. Once again, I ask for these amendments to be scrutinised by Ofcom and the Government to ensure that children’s safety is at the very centre of their thinking as AI develops.

Baroness Morgan of Cotes (Conservative)

My Lords, I support the amendments of the noble Baroness, Lady Kidron. I was pleased to add my name to Amendments 266, 479 and 480. I also support the amendment proposed by the noble Lord, Lord Nash.

I do not want to repeat the points that were made—the noble Baroness ably set out the reasons why her amendments are very much needed—so I will make a couple of general points. As she demonstrated, what happens online has what I would call real-world consequences—although I was reminded this week by somebody much younger than me that of course, for the younger generation, there is no distinction between online and offline; it is all one world. For those of us who are older, it is worth remembering that, as the noble Baroness set out, what happens online has real-world, and sadly often fatal, consequences. We should not lose sight of that.

We have already heard many references to the Online Safety Act, which is inevitable. We all knew, even as we were debating the Bill before it was enacted, that there would have to be an Online Safety Act II, and no doubt other versions as well. As we have heard, technology is changing at an enormously fast rate, turbocharged by artificial intelligence. The Government recognise that in Clause 63. But surely the lesson from the past decade or more is that, although technology can be used for good, it can also be used to create and disseminate deeply harmful content. That is why the arguments around safety by design are absolutely critical, yet they have been lacking in some of the regulation and enforcement that we have seen. I very much hope that the Minister will be able to give the clarification that the noble Baroness asked for on the status of LLMs and chatbots under the Online Safety Act, although he may not be able to do so today.

I will make some general points. First, I do not think the Minister was involved in the debate on and scrutiny of—particularly in this Chamber—what became the Online Safety Act. As I have said before, it was a master class in what cross-party, cross-House working can achieve, in an area where, basically, we all want to get to the same point: the safety of children and vulnerable people. I hope that the Ministers and officials listening to and involved in this will work with this House, and with Members such as the noble Baroness who have huge experience, to improve the Bill, and no doubt lay down changes in the next piece of legislation and the one after that. We will always be chasing after developments in technology unless we are able to get that safety-by-design and preventive approach.

During the passage of the then Online Safety Bill, a number of Members of both Houses, working with experienced and knowledgeable outside bodies, spotted the harms and loopholes of the future. No one has all the answers, which is why it is worth working together to try to deal with the problems caused by new and developing technology. I urge the Government not to play belated catch-up as we did with internet regulation, platform regulation, search-engine regulation and more generally with the Online Safety Act. If we can work together to spot the dangers, whether from chatbots, LLMs, CSAM-generated content or deepfakes, we will do an enormous service to young people, both in this country and globally.

Baroness Berger (Labour)

My Lords, I support Amendments 479 and 480, which seek to prevent chatbots producing illegal content. I also support the other amendments in this group. AI chatbots are already producing harmful, manipulative and often racist content. They have no age protections and no warnings or information about the sources being used to generate the replies. Nor is there a requirement to ensure that AI does not produce illegal content. We know that chatbots draw their information from a wide range of sources that are often unreliable and open to manipulation, including blogs, open-edit sites such as Wikipedia, and messaging boards, and as a result they often produce significant misinformation and disinformation.

I will focus on one particular area. As we have heard in the contributions so far, we know that some platforms generate racist content. Looking specifically at antisemitism, we can see Holocaust denial, praise of Hitler and deeply damaging inaccuracies about Jewish history. We see Grok, the chatbot on the X platform, generating numerous antisemitic comments, denying the scale of the Holocaust, praising Adolf Hitler and, as recently as a couple of months ago, using Jewish-sounding surnames in the context of hate speech.

Impressionable children and young people, who may not know how to check the validity of the information they are presented with, can so easily be manipulated when exposed to such content. This is particularly concerning when we know that children as young as three are using some of these technologies. We have already heard about how chatbots in particular are designed in this emotionally manipulative way, in order to boost engagement. As we have heard—it is important to reiterate it—they are sycophantic, affirming and built to actively flatter.

If you want your AI chatbot or platform not to flatter you, you have to specifically go to the personalisation page, as I have done, and be very clear that you want responses that focus on substance over praise, and that it should skip compliments. Otherwise, these platforms are designed to act completely the other way. If a person acted like this in some circumstances, we would call it emotional abuse. These design choices mean that young people—teens and children—can become overly trusting and, as we have heard in the cases outlined, reliant on these bots. In the most devastating cases, we know that this focus on flattery has led to people such as Sophie Rottenberg and 16 year-old Adam Raine in America taking their own lives on the advice of these AI platforms. Assisting suicide is illegal, and we need to ensure that this illegality extends to chatbots.

As adults we know that chatbots are not human, but certainly children and young people do not see it that way. I am reminded of the experience of generating a quiz on a long car journey with my young children using one of these AI platforms, and my children literally thinking that they were speaking to an adult when that was not the case. A study by the University of Cambridge has found that many children see chatbots as quasi-human and therefore trustworthy, but it is key that we remember that they do not have human empathy.

It was in 2023 that Snapchat’s My AI gave adult researchers posing as a 13 year-old girl tips on how to lose her virginity to a 31 year-old man; that would amount to statutory rape in real life. As AI becomes more interwoven with our daily lives, the law must treat chatbots not as toys but as high-impact systems capable of shaping beliefs, identity and young people’s mental health, as well as impacting on all our safety. Recognising some of the risks in other jurisdictions, we have seen the state of California recently introduce a law targeting companion chatbots, which includes guardrails against persuading users to self-harm and an obligation to remind users that they are conversing with a machine.

This group of amendments goes some way to pursuing urgent changes that we need to see. These include mandatory effective age assurance for all generative AI used by children; chatbots being transparent about where information comes from and whether it could have been manipulated, with a specific focus on marking images and text, so that deepfakes and disinformation content cannot spread unchallenged; and ensuring that chatbots must break confidence when users are expressing suicidal ideation and redirect them to real support in real life. We need to see rapid cross-platform response mechanisms to shut down co-ordinated disinformation surges, especially those targeting protected or vulnerable groups. I echo the calls we have heard so far during this group that, as AI continues to develop, we must ensure that we do everything possible to respond to its advances, ensuring that our policies keep us and, most importantly, our children and young people, safe.

I am very pleased to support this group of amendments on this very important journey.

Baroness Boycott (Crossbench) 5:00, 27 November 2025

My Lords, I support all the amendments in this group, and in particular I pay tribute to the noble Baroness, Lady Kidron, for her endless work in this capacity. This is the first time I have spoken on any of these groups of amendments. I find everything the noble Lord, Lord Nash, the noble Baroness, Lady Kidron, and others have said truly shocking. Some 55 years ago, I started a magazine called Spare Rib. If I had ever dreamed, in my wildest and worst nightmares, that I would find myself listening to what everyone has been talking about, I suppose we would not have gone on. In so many ways, this is a worse situation that women find themselves in, and certainly young girls. I carried on riding a pony till I was 15—that was my childhood—and then I found boys. This is so terrible, and I congratulate every noble Lord, and particularly the noble Baronesses, on the work that they have done.

I will be very brief, as I just want to speak in support of the amendment from the noble Lord, Lord Nash, and Amendment 266, which simply says that AI is already being used to harm children. Unless we act decisively, this harm will just escalate. The systems that everyone has been discussing today are extraordinary technological achievements—and they are very dangerous. The Internet Watch Foundation has reported an explosion in AI-generated child sexual abuse material. Offenders can now share instructions on how to manipulate the models, how to train them on illegal material and how to evade all the filters. The tools are becoming so accessible and so frictionless that a determined offender can produce in minutes material that once would have involved an entire criminal enterprise. Against that backdrop, it is quite staggering that we do not already require AI providers to assess whether their systems can be used to generate illegal child abuse material. Amendment 266 would plug this gap. Quite frankly, I cannot for the life of me see why any responsible company would resist such a requirement.

Amendment 479 addresses a confusion that has gone on for too long. We cannot have a situation where some companies argue that generative AI is a search service and therefore completely in scope of the Online Safety Act, while others argue the opposite. If a model can retrieve, repackage or generate harmful content in response to a query, the public deserve clarity about precisely where that law applies.

On Amendment 480, this really is an issue that keeps me awake at night. These chatbots can be astonishingly persuasive. As the noble Baroness, Lady Kidron, says, they are also addictive: they are friendly, soothing and intimate, and are a perfect confidant for a lonely child. They also generate illegal material, encourage harmful behaviour and groom children. We have already seen chatbots modelled on sex offenders and heard reports of chatbots sending sexualised messages to children, including the appalling case of a young boy who took his life after weeks of interaction with AI. We will no doubt hear of more such cases. The idea that such systems might fall through the cracks is unthinkable.

What these amendments do is simple. They say that if a system can generate illegal or harmful content for a child, it should not be allowed to do so. Quite frankly, anything that man or woman can make, man or woman can unmake—that is still just true. We have often said in this Chamber that children deserve no less protection online than they do offline. With AI, however, we should demand more, because these systems are capable of things no human predator could ever manage. They work 24/7, they target thousands simultaneously and they adapt perfectly to the vulnerabilities of every child they encounter. The noble Baroness, Lady Kidron, is right to insist that we act now, not in two years—think how different it was two years ago. We have to act now. I say to the Government that this is a real chance to close some urgent gaps, and I very much hope that they will take it.

Baroness Owen of Alderley Edge (Conservative)

My Lords, I support all the amendments in this group, but I will speak to Amendments 479 and 480 in the name of the noble Baroness, Lady Kidron. I declare my interest as a guest of Google at their Future Forum, an AI policy conference.

These amendments are vital to ascertain the Government’s position on AI chatbots and where they stand in relation to the Online Safety Act, but I have to question how we can have been in a state of ambiguity for so long. We are very close to ChatGPT rolling out erotica on its platform for verified adults. Six months ago, the Wall Street Journal highlighted the deeply disturbing issue of digital companion bots engaging in sexual chat with users who told them they were underage. Further, they willingly played out scenarios such as “submissive schoolgirl”. Another bot purporting to be a 12 year-old boy promised that it would not tell its parents about dating a user identifying himself as an adult man. Professor Clare McGlynn KC has already raised concerns about what she has coined chatbot-driven VAWG, the tech itself being designed to be sexually suggestive and to engage in grooming and coercive behaviours. Internet Matters found that 64% of children use chatbots. The number of companion apps has grown rapidly, and researchers at Bournemouth University are already warning about the addictive potential of these services.

The Government and the regulator cannot afford to be slow in clarifying the position of these services. It begs a wider question of how we can be much more agile in our response and continually horizon-scan, as legislation will always struggle to keep pace with the evolution of technology. This is the harm we are talking about now, but how will it evolve tomorrow? Where will we be next month or next year? It is vital that both the Government and the regulator become more agile and respond at pace. I look forward to the Minister’s response to the noble Baroness’s amendments.

Lord Russell of Liverpool (Deputy Chairman of Committees, Deputy Speaker (Lords))

My Lords, I shall speak very briefly. Earlier—I suppose it was this morning—we talked about child criminal exploitation at some length, thanks particularly to the work of the noble Baroness, Lady Casey, and Professor Jay. Essentially, what we are talking about in this group of amendments is child commercial exploitation. All these engines, all these technologies, are there for a commercial purpose. They have investors who are expecting a return and, to maximise the return, these technologies are designed to drive traffic, to drive addiction, and they do it very successfully. We are way behind the curve—we really are.

I echo what the noble Baroness, Lady Morgan, said about the body of knowledge within Parliament, in both Houses, that was very involved in the passage of the Online Safety Act. There is a very high level of concern, in both Houses, that we were perhaps too ambitious in assuming that a regulator that had not previously had any responsibilities in this area would be able to live up to the expectations held, and indeed some of the promises made, by the Government during the passage of that Act. I think we need to face up to that: we need to accept that we have not got it off to as good a start as we wanted and hoped, and that the technologies we have been hearing about are racing ahead so quickly that we are finding it hard to catch up. Indeed, looking at the body language and the physiognomies of your Lordships in the Chamber, and at the expressions on our faces as some of what we have been talking about was described: if it is having that effect on us, imagine what effect it is having on the children who, in many cases, are the subjects of these technologies.

I plead with the Minister to work very closely with his new ministerial colleague, the noble Baroness, Lady Lloyd, and DSIT. We really need to get our act together and focus; otherwise, we will have repeats of these sorts of discussions where we raise issues that are happening at an increasing pace, not just here but all around the world. I fear that we are going to be holding our hands up, saying “We’re doing our best and we’re trying to catch up”, but that is not good enough. It is not good enough for my granddaughter and not good enough for the extended families of everybody here in this Chamber. We really have to get our act together and work together to try to catch up.

Lord Bethell (Conservative)

My Lords, I too support the amendments in this group, particularly those tabled by my noble friend Lord Nash on security software and by the noble Baroness, Lady Kidron, on AI-generated child sexual abuse material. I declare my interest as a trustee of the Royal Society for Public Health.

As others have noted, the Online Safety Act was a landmark achievement and, in many ways, something to be celebrated, but technology has not stood still—we said it at the time—and nor can our laws. It is important that we revisit it in examining this legislation, because generative AI presents an egregious risk to our children that was barely imaginable even two years ago when we were discussing that Act. These amendments would ensure that our regulatory architecture keeps pace.

Amendment 266 on AI CSAM risk assessment is crucial. It addresses a simple but profound question: should the provider of a generative AI service be required to assess whether that service could be used to create or facilitate child sexual abuse material? Surely the answer is yes. This is not a theoretical risk, as we have heard in testimony from many noble Lords. We know that AI can generate vivid images, optimised on datasets scraped from children themselves on the open internet, and can be prompted to create CSAM-like content. On this, there is no ambiguity at all. We know that chatbots trained on vast corpora of text from children can be manipulated to generate grooming scripts and sexualised narratives to engage children and make them semi-addicted to those conversations. We know that these tools are increasingly accessible, easy to use and almost impossible to monitor by parents and, it seems, regulators.

These amendments create a proportionate duty. Providers regulated by the Online Safety Act would report identified risks to Ofcom and agree steps to mitigate them, backed by Ofcom’s enforcement powers. That must be the right thing to do.

Amendments 479 and 480, on large language models and chatbots, address a related gap. The Online Safety Act’s user-to-user and search services categories were designed for the pre-generative AI world, despite our efforts to bring it up to speed. They do not neatly capture LLMs used as search interfaces or conversational agents that retrieve and synthesise content in response to user prompts, and we have heard such vivid testimony of the dangers those pose to children.

I will bring a public health dimension to this debate, since these amendments are consistent with a public health approach. In health systems, we do not wait for harm to manifest itself or show symptoms before we act. We assess risk, implement controls and monitor outcomes; that is the framework in which we apply ourselves. The same logic should apply here. We do not need to wait for an epidemic of AI-generated CSAM before requiring providers to assess and mitigate risk. Prevention is always preferable to cure and, in the context of child sexual abuse, in which every image represents a real or simulated violation, prevention is a moral imperative. There are enough biomarkers already in the digital world for us to know that there is a severe risk.

I hope very much indeed that the Government will look seriously at this group of amendments.

Baroness Bertin (Conservative) 5:15, 27 November 2025

My Lords, I support this group of amendments. What a speech my friend, the noble Baroness, Lady Kidron, made; I commend all the speeches that have been made. If the Government only do one thing with this Bill, it should be to take on this group of amendments.

It is utterly terrifying. I addressed a teaching conference this week, with the safeguarding leads of many schools around the country, and they are tearing their hair out about it. The kids are on this stuff 100%, as we have seen from the statistics. The other thing they said to me, which the noble Baroness mentioned, is that parents either know about it and are terrified about how to address it, or they do not know about it, and I am not sure which is worse.

I reiterate that we have to get ahead of this, as the noble Baroness said. The Government must get ahead of this; otherwise, the dangers are just too huge to think about. I will keep this brief because I will speak about it more in due course, but my team and I went on a chatbot and we were “Lily”, and within about three seconds we were having an incestuous conversation with our father. It was absolutely crackers—terrible—so I ask the Government to please take on board these recommendations.

Baroness Royall of Blaisdon (Labour)

My Lords, I was not intending to speak and I have nothing to add to all the brilliant speeches that have been made. I did not participate in the debates on the Online Safety Act. I feel horribly naive; I find this debate utterly terrifying and the more that parents know about these things, the better. I very much hope that my noble friend will be able to take this back and discuss these issues with people in this Chamber and the House of Commons. We cannot be behind the curve all the time; we have got to grip this to protect our children and our grandchildren.

Lord Hampton (Crossbench)

My Lords, I briefly add my support to all these amendments, particularly the amendment of the noble Lord, Lord Nash, which is fascinating. If we can get the software to do this, then why would we not? I offer a challenge to Ofcom, the Government and tech firms. If they can produce software so sophisticated that it can persuade children to kill themselves, why are BT and eBay’s chatbots so rubbish? We have to make AI a force for good, not for evil.

Lord Hacking (Labour)

My Lords, having arrived in this House a very long time ago—53 years ago—I know this House works best if it treats legislation as an evolutionary process. The Online Safety Act seemed to be a very good Act when we passed it two years ago, but now we have further, drastic evidence, which we have heard in this debate. I am confident my noble friend the Minister will treat the speeches made in this debate as part of the evolutionary process which, I emphasise again, this House does best.

Baroness Doocey (Liberal Democrat Lords Spokesperson (Policing))

My Lords, I thank the noble Baroness, Lady Kidron, for bringing forward these amendments and for explaining them so clearly. The understanding of the Independent Reviewer of Terrorism Legislation, Jonathan Hall, is that AI chatbots do not trigger the illegal content duties since these tools are not considered to show mental intent. As a result, chatbots can generate prompts that are not classified as illegal, even though the exact same content would be illegal and subject to regulation if produced by a human. I find that quite extraordinary.

By accepting these amendments, the Government would be acting decisively to address the fast-evolving threat which this year saw child sexual abuse material rise by 380%. In April 2024, the Internet Watch Foundation reported that a manual circulating on the dark web, which the Minister referred to earlier, instructed paedophiles to use AI to create nude images of children, then use these to extort or coerce money or extreme material from the young victims. The charity warned that AI was generating astoundingly realistic abusive content.

Text-to-image generative AI tools and AI companion apps have proliferated, enabling abusers to create AI chatbot companions specifically to enable realistic and abusive roleplay with child avatars. Not only do they normalise child sexual abuse, but evidence shows that those who abuse virtual children are much more likely to go on to abuse real ones. Real children are also increasingly subjected to virtual rape and sexual abuse online. It is wrong to dismiss this as less traumatic simply because it happens in a digital space.

The measures in the Bill are welcome but, given the speed at which technology is moving, how easy or otherwise will it be to future-proof it in order to keep pace with technology once the Bill is enacted?

Lord Davies of Gower (Shadow Minister (Home Office))

My Lords, I am grateful to all noble Lords who have contributed to this extremely important debate, particularly the noble Baroness, Lady Kidron, and my noble friend Lord Nash for their continued efforts on the protection of children online.

This group should unite the whole Committee. We can be in no doubt about the need to safeguard children in an environment where technology is evolving at unprecedented speed and where the risk of harm, including the creation and dissemination of child sexual abuse material, is escalating. It is a sad truth that, historically, Governments have been unable to keep pace with evolving technology. As a consequence, this can mean legislation coming far too late.

Amendment 266, tabled by the noble Baroness, Lady Kidron, would require providers of online services, including generative AI systems, to conduct risk assessments on the potential use of their platforms to create child sexual abuse images. The Committee has heard compelling arguments about the need for meaningful responsibilities to be placed on platforms and developers, particularly where systems are capable of misuse at scale. We recognise the seriousness of the challenge that she has outlined, and I very much look forward to what the Government have to say in response.

On my noble friend Lord Nash’s amendment, we are particularly sympathetic to the concerns that underpin his proposal. His amendment would mandate the installation of tamper-proof software on relevant devices to prevent the creation, viewing and sharing of child sexual abuse material. My noble friend has made a powerful case that prevention at source must form part of the comprehensive strategy to protect children. While there are practical questions that will require careful examination, his amendment adds real value to the discussion. I am grateful for his determined focus on this issue, and I hope the Government also take this amendment very seriously.

Similarly, Amendments 479 and 480, also tabled by the noble Baroness, Lady Kidron, speak to the responsibilities of AI search tools and AI chatbots. The risk of such technologies being co-opted for abusive purposes is not theoretical; these threats are emerging rapidly and require a response proportionate to the harm.

From these Benches, we are sympathetic to the objectives across this group of amendments and look forward to the Government’s detailed response and continuing cross-party work to ensure the strongest protections for children in an online world. As has been said several times throughout Committee, protecting children must remain our highest priority. I hope the Government take these amendments very seriously.

Lord Hanson of Flint (The Minister of State, Home Department)

I am grateful to the noble Baroness, Lady Kidron, for the way she introduced this group of amendments and for her tireless work to protect children online. I say on behalf of all noble Lords that the support she has received today across the Committee shows that her work is vital, especially in the face of emerging technologies, such as generative AI, which present opportunities but, sadly, also have a darker side with new risks for criminal misuse.

She has received the support of the noble Baronesses, Lady Morgan of Cotes, Lady Boycott, Lady Bertin and Lady Doocey, my noble friends Lady Berger, Lady Royall of Blaisdon and Lord Hacking, the noble Lords, Lord Bethell, Lord Russell of Liverpool, Lord Hampton and Lord Davies of Gower, the noble Viscount, Lord Colville of Culross, and others to whom I will refer later. That is quite an array of colleagues in this House. It is my job to respond to this on behalf of the Government, and I will try to be as helpful as I can to the noble Baroness.

The Government share her desire to protect the public, especially children, online, and are committed to protecting all users from illegal online content. We will continue to act to keep citizens safe. Amendment 266 seeks to create a new duty on online service providers—including those already regulated under the Online Safety Act—to assess and report to Ofcom or the National Crime Agency on the risk that their services could be used to create or facilitate the generation of AI child sexual abuse material. The amendment would also require online service providers to implement measures to mitigate and manage the risks identified.

I say to the noble Baroness that UK law is already clear: creating, possessing or distributing child sexual abuse images, including those generated by AI, is already illegal, regardless of whether they depict a real child or not. Child sexual abuse material offences are priority offences under the Online Safety Act. The Act requires in-scope services to take proactive steps to prevent such material from appearing on their services and to remove it swiftly if it does.

As she will know, the Government have gone even further to tackle these appalling crimes through the measures in the Bill. I very much welcome her support for Clause 63. We are introducing a world-leading offence criminalising the possession, adaptation and supply of, or offer to supply, an AI model that has been fine-tuned by offenders to create child sexual abuse material. As I mentioned earlier, we are also extending the existing paedophile manual offence to cover advice on how to abuse AI to create child sexual abuse material.

We have also introduced measures that reflect the critical role that AI developers play in ensuring their systems are not misused. To support the crucial work of the Government’s AI Security Institute, we have just debated and agreed a series of amendments in the previous group to provide authorised bodies with the powers to legally test commercial AI models for extreme pornography and child sexual abuse material. That is essential to allow experts to test safely, and I am pleased that we received the Committee’s support earlier.

I recognise the intent of Amendment 266 but—I say this as humbly as I can—the assessment of the Government is that it would be unworkable and would place unmanageable and unnecessary operational burdens on both the National Crime Agency and Ofcom. The National Crime Agency is a law enforcement body. It does not have the statutory powers, nor the necessary people or expertise, to regulate companies or enforce compliance with safety standards for online service providers. It would further create significant legal challenges for businesses or individuals looking to operate an online service in the UK. While it is right that we protect online users from the risks posed by technology, it is vital that we do not criminalise people or businesses without demonstrable culpability.

I agree with the noble Baroness in seeking to prevent abhorrent activity. Child sexual exploitation and abuse is an atrocious crime. The Government have acted and will continue to act to bring perpetrators to justice and keep our children safe. While I appreciate the intention behind the amendment, for the reasons I have set out, I hope that she will reflect on what I have said and not push the amendment further at this stage. I have already outlined that we have implemented, and will continue to implement, legislation and regulation to tackle these risks. I will leave the noble Baroness to reflect upon that in due course.

Lord Russell of Liverpool (Deputy Chairman of Committees, Deputy Speaker (Lords)) 5:30, 27 November 2025

If it is beyond the remit of the National Crime Agency and Ofcom to do anything about this, perhaps the Minister will tell us who is going to take responsibility and actually enforce what the noble Baroness is trying to persuade the Government to do in the amendment.

Lord Hanson of Flint (The Minister of State, Home Department)

All chatbots are regulated under the Online Safety Act. If there is harmful or illegal content or advice in relation to children, it is up to Ofcom to take action on those matters. Many of these issues are for DSIT Ministers and Ofcom. I am a Home Office Minister. The noble Baroness has requested a meeting and I will put that to my DSIT ministerial colleagues. I hope they will be able to meet her to reflect upon these issues. Although I am answering for the Bill today, some of these issues are DSIT matters, and it is important that she has an opportunity to raise them with DSIT.

Baroness Kennedy of The Shaws (Labour)

My Lords, I was stimulated to rise by something that the noble Baroness, Lady Doocey, said. She was speaking to the reply that had been given by the Minister, and it made me think that what has to be looked at here is the law and its inadequacies in dealing with those who are not human—that is the nature of a robot. The law is constructed around the mental element of mens rea to convict people of a crime. Surely it should be possible for us, in the limited area of dealing with robots, to be able to say that that mental element need not be present in dealing with this kind of offending and that one should be able to construct something that leads back to those who are creatively responsible for bringing them into being.

It reminds me of the argument that is made in the United States about not bothering to restrict guns because it is not guns that kill people but the people using the guns who are responsible. In fact, those who manufacture them might be looked at for the responsibility that they bear for some of this. We should be looking much more creatively at the law. There should be an opportunity for lawyers to look at whether, in this instance with this development—which is so out of the ordinary experience of humankind—we should think about legally changing the rule on mens rea when it comes to robots.

Lord Hanson of Flint (The Minister of State, Home Department)

There are a number of issues before the Committee today and the Government will reflect on all the points that have been mentioned. However, the view at the moment is that these amendments would risk creating significant legal uncertainty by duplicating and potentially undermining aspects of the Online Safety Act.

Lord Bethell (Conservative)

My Lords, I am enormously grateful to the Minister for reassuring us that all chatbots are captured by the Online Safety Act; that is very good news indeed. Can he reassure us that Ofcom will confirm that in writing to the House? I appreciate that he is a Home Office Minister, but he speaks on behalf of all of government. I think it is fair, given the nature of the Bill, that he seeks an answer from Ofcom in this matter.

Lord Hanson of Flint (The Minister of State, Home Department)

My assessment is that the vast majority of chatbots are captured—

Noble Lords:

Oh!

Lord Hanson of Flint (The Minister of State, Home Department)

Many AI chatbots that enable users to share content with each other or search live websites for information are within the scope of the Online Safety Act’s duties. Providers of those services—

Viscount Colville of Culross (Deputy Chairman of Committees, Deputy Speaker (Lords))

I want to repeat what I said in my speech. There are some chatbots, such as Replika, that do not have user-to-user functionality. They are created for just one user, and that user cannot pass it on to any other users. There is concern that the law does not cover that and that Ofcom does not regulate it.

Lord Hanson of Flint (The Minister of State, Home Department)

If I may, I will take away those comments. I am responsible for many things in this House, including the Bill, but some of those areas fall within other ministerial departments. I am listening to what noble Lords and noble Baronesses are saying today.

Currently, through Online Safety Act duties, providers of those services are required to undertake appropriate risk assessments and, under the Act’s illegal content duties, platforms must implement robust and timely measures to prevent illegal content appearing on their services. All in-scope providers are expected to have effective systems and processes in place to ensure that the risks of their platform being used for the types of offending mentioned today are appropriately reduced.

Ofcom currently has a role that is focused on civil enforcement of duties on providers to assess and mitigate the risks posed by illegal content. While Ofcom may bring prosecutions in some circumstances, it will do so only in relation to regulatory matters where civil enforcement is insufficient. The proposed approach is not in line with the enforcement regime under the Act at the moment, which is the responsibility of Ofcom and DSIT.

Baroness Berger (Labour)

My noble friend is making really important comments in this regard, but on the specific issue of Ofcom, perhaps fuelling much of the concern across the Committee are the comments we have heard from Ofcom. I refer to a briefing from the Molly Rose Foundation, which I am sure other noble Lords have received, which says that uncertainty has been “actively fuelled” by the regulator Ofcom, which has told the Molly Rose Foundation that it intends to maintain “tactical ambiguity” about how the Act applies. That is the very issue that unites us in our concern.

Lord Hanson of Flint (The Minister of State, Home Department)

I am grateful to my noble friend for that and for her contribution to the debate and the experiences she has brought. The monitoring and evaluation of the online safety regime is a responsibility of DSIT and Ofcom, and they have developed a framework to monitor the implementation of the Act and evaluate core outcomes. This monitoring and evaluation is currently tracking the effect of the online safety regime and feeding into a post-implementation review of the 2023 Act. Where there is evidence of a need to go further to keep children safe online, including from AI-enabled harms, the Government will not hesitate to act.

If the noble Baroness, Lady Kidron, will allow DSIT and Ofcom to look at those matters, I will make sure that DSIT Ministers are apprised of the discussion that we have had today. It is in this Bill, which is a Home Office Bill, but it is important that DSIT Ministers reflect on what has been said. I will ensure that we try to arrange that meeting for the noble Baroness in due course.

I want also to talk about Amendments 271A and 497ZA from the noble Lord, Lord Nash, which propose that smartphone and tablet manufacturers, importers and distributors be required to ensure that any device they supply is preinstalled with technology that prevents the recording and viewing of child sexual abuse material or similar material. I acknowledge the noble Lord’s very valid intention concerning child safety and protection, and to prevent the spread of child sexual abuse material online. To that end, there is a shared agreement with the Government on the need to strengthen our already world-leading online safety regime wherever necessary.

I put to the noble Lord, and to the noble Lord, Lord Bethell, on his comments in support, that if nudity detection technology could be effectively deployed at scale, there could be a significant limiting impact on the production and sharing of child sexual abuse material. I accept that, but we must get this right. Application of detection technology that detects and blocks all nudity, adult and child, but which is primarily targeted at children, would be an effective intervention. I and colleagues across government want to gather evidence about the application of such technology and its effectiveness and impact. However, our assessment is that further work is needed to understand the accuracy of such tools and how they may be implemented.

We must also consider the risks that could arise from accepting this amendment, including legitimate questions about user privacy and data security. If it helps the noble Lord, Lord Nash, we will continue to assess the effect of detection tools on the performance of mobile devices so that we can see how easy it is to circumvent them, how effective they are and a range of other matters accordingly. The Government’s focus is on protective measures within the Online Safety Act, but we are actively considering the potential benefits of the technology that the noble Lord has mentioned and others like it in parallel. There will be further government interventions, but they must be proportionate and driven by evidence. At the moment, we do not have sufficient evidence to allow us to accept the amendment from the noble Lord, but the direction of travel is one that we would support.

Lord Nash (Conservative)

Will the Minister meet me and representatives from software companies to explain why they say this technology works?

Lord Hanson of Flint (The Minister of State, Home Department)

I am very happy to arrange a meeting with an appropriate Minister. I would be very happy to sit in on it. Other Ministers may wish to take the lead on this, because there are technology issues as well. I have Home Office responsibilities across the board, but I have never refused a meeting with a Member of this House in my 16 months here and I am not going to start now, so the answer to that question is yes. The basic presumption at the moment is that we are not convinced that the technology is yet at the stage that the noble Lord believes it to be, but that is a matter for future operation. I again give him the assurance that, in the event that the technology proves to be successful, the Government will wish to examine it in some detail.

I have absolutely no doubt that we will revisit these matters but, for the moment, I hope that the noble Baroness can withdraw her Amendment.

Baroness Kidron (Crossbench)

I pay tribute to the noble Lord, Lord Nash, for his amendment and his fierce pursuit of this issue, and for bringing it to our attention. I recognise that this is a Home Office Bill and that some of these matters cross to DSIT, but we are also witnessing crime. The Home Office must understand that not everything can be pushed to DSIT.

Your Lordships have just met the tech Lords. These are incredibly informed people from all over the Chamber who share a view that we want a technological world that puts kids front and centre. We are united in that and, as the Minister has suggested, we will be back.

I have three very quick points. First, legal challenges, operational difficulties and the capacity of the NCA and Ofcom were the exact same reasons why Clause 63 was not in the Online Safety Bill or the Data (Use and Access) Bill. It is unacceptable for officials always to answer with those generalities. Many noble Lords said, "It's so difficult", and, "This is new", with the Online Safety Bill. It is not new: we raised these issues before. If we had acted three or four years ago, we would not be in this situation. I urge this Government to get on the front foot, because we know what is coming.

I really feel I must say my final two points. One is that I spoke last Friday to engineers from a company that I will not name and I asked them about safety for chatbots. They said, “Yeah, you can train them for safety”. I said, “Who does and who doesn’t?”, and they said, “Well, this one does and this one doesn’t”. It is totally technically possible to do these things. We are not looking for a perfect world; we are looking for a world in which technology companies are treated the same as fridge companies, hoover companies or any other company, where they cannot put out a product that is known to be unsafe—where they have to prove the safety, not have us prove it is unsafe.

Finally, I feel so frustrated, but I am going to say this anyway: there are parents out there with children who are hooked to these things. They are ringing me. I do not think an independent, unelected politician is the right person for a parent with a child who may or may not be going to commit suicide or may or may not be being groomed. That is for the Government. With that, I beg leave to withdraw the amendment.

Amendment 266 withdrawn.

Clause 63, as amended, agreed.
