Amendment 266

Part of Crime and Policing Bill - Committee (4th Day) (Continued) – in the House of Lords at 4:45 pm on 27 November 2025.


Baroness Berger (Labour) 4:45 pm, 27 November 2025

My Lords, I support Amendments 479 and 480, which seek to prevent chatbots from producing illegal content. I also support the other amendments in this group. AI chatbots are already producing harmful, manipulative and often racist content. They have no age protections and no warnings or information about the sources being used to generate their replies. Nor is there a requirement to ensure that AI does not produce illegal content. We know that chatbots draw their information from a wide range of sources that are often unreliable and open to manipulation, including blogs, open-edit sites such as Wikipedia, and message boards, and as a result they often produce significant misinformation and disinformation.

I will focus on one particular area. As we have heard in the contributions so far, we know that some platforms generate racist content. Looking specifically at antisemitism, we can see Holocaust denial, praise of Hitler and deeply damaging inaccuracies about Jewish history. We have seen Grok, the chatbot on the X platform, generate numerous antisemitic comments, denying the scale of the Holocaust, praising Adolf Hitler and, as recently as a couple of months ago, using Jewish-sounding surnames in the context of hate speech.

Impressionable children and young people, who may not know how to check the validity of the information they are presented with, can so easily be manipulated when exposed to such content. This is particularly concerning when we know that children as young as three are using some of these technologies. We have already heard about how chatbots in particular are designed in this emotionally manipulative way, in order to boost engagement. As we have heard—it is important to reiterate it—they are sycophantic, affirming and built to actively flatter.

If you want your AI chatbot or platform not to flatter you, you have to go specifically to the personalisation page, as I have done, and be very clear that you want responses that focus on substance over praise, and that it should skip compliments. Otherwise, these platforms are designed to act in completely the opposite way. If a person acted like this in some circumstances, we would call it emotional abuse. These design choices mean that young people—teens and children—can become overly trusting and, as we have heard in the cases outlined, reliant on these bots. In the most devastating cases, we know that this focus on flattery has led to people such as Sophie Rottenberg and 16-year-old Adam Raine in America taking their own lives on the advice of these AI platforms. Assisting suicide is illegal, and we need to ensure that this illegality extends to chatbots.

As adults we know that chatbots are not human, but certainly children and young people do not see it that way. I am reminded of the experience of generating a quiz on a long car journey with my young children using one of these AI platforms, and my children literally thinking that they were speaking to an adult when that was not the case. A study by the University of Cambridge has found that many children see chatbots as quasi-human and therefore trustworthy, but it is key that we remember that they do not have human empathy.

It was in 2023 that Snapchat’s My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old man; that would amount to statutory rape in real life. As AI becomes more interwoven with our daily lives, the law must treat chatbots not as toys but as high-impact systems capable of shaping beliefs, identity and young people’s mental health, as well as impacting on all our safety. Recognising some of these risks, other jurisdictions have acted: the state of California recently introduced a law targeting companion chatbots, which includes guardrails against persuading users to self-harm and an obligation to remind users that they are conversing with a machine.

This group of amendments goes some way to pursuing the urgent changes that we need to see. These include mandatory, effective age assurance for all generative AI used by children; chatbots being transparent about where their information comes from and whether it could have been manipulated, with images and text marked accordingly, so that deepfakes and disinformation cannot spread unchallenged; and ensuring that chatbots must break confidence when users are expressing suicidal ideation and redirect those users to real support in real life. We need to see rapid cross-platform response mechanisms to shut down co-ordinated disinformation surges, especially those targeting protected or vulnerable groups. I echo the calls we have heard so far during this group that, as AI continues to develop, we must ensure that we do everything possible to respond to its advances, ensuring that our policies keep us and, most importantly, our children and young people, safe.

I am very pleased to support this group of amendments on this very important journey.
