Online Safety Bill - Committee (6th Day) – in the House of Lords at 3:30 pm on 11 May 2023.
Moved by Baroness Merron
52: After Clause 15, insert the following new Clause—
"Health disinformation and misinformation
(1) This section sets out the duties about harmful health disinformation and misinformation which apply in relation to Category 1 services.
The duties
(2) A duty to carry out and keep up to date a risk assessment of the risks presented by harmful health disinformation and misinformation that is present on the service.
(3) A duty to develop and maintain a policy setting out the service's approach to the treatment of harmful health disinformation and misinformation on the service.
(4) A duty to explain in the policy how the service's approach to the treatment of harmful disinformation and misinformation is designed to mitigate or manage any risks identified in the latest risk assessment.
(5) A duty to summarise the policy in the terms of service, and to include provisions in the terms of service about how that content is to be treated on the service.
(6) A duty to ensure that the policy, and any related terms of service, are—
(a) clear and accessible, and
(b) applied consistently.
(7) In this section, "harmful health disinformation and misinformation" means content which contains information which—
(a) is false or misleading in a material respect; and
(b) presents a material risk of significant harm to the health of an appreciable number of persons in the United Kingdom."
Member's explanatory statement
This new Clause would introduce a variety of duties on Category 1 platforms, in relation to their treatment of content which represents harmful health misinformation and disinformation.
My Lords, I shall speak to this group which includes Amendments 52, 99 and 222 in my name. These are complemented by Amendments 223 and 224 in the name of my noble friend Lord Knight. I am grateful to the noble Lords, Lord Clement-Jones and Lord Bethell, and to the noble Baroness, Lady Bennett, for putting their names to the amendments in this group. I am also grateful to the noble Lord, Lord Moylan, for tabling Amendments 59, 107 and 264. I appreciate also the work done by the APPG on Digital Regulation and Responsibility and by Full Fact on this group, as well as on many others in our deliberations.
These amendments would ensure that platforms were required to undertake a health misinformation and disinformation risk assessment. They would also require that they have a clear policy in their terms of service on dealing with harmful, false and misleading health information, and that there are mechanisms to support and monitor this, including through the effective operation of an advisory committee which Ofcom would be required to consult. I appreciate that the Minister may wish to refer to the false communication offence in Clause 160 as a reason why these amendments are not required. In order to pre-empt this suggestion, I put it to him that the provision does not do the job, as it covers only a user sending a knowingly false communication with the intention of causing harm, which does not cover most of the online health misinformation and disinformation with which these amendments are concerned.
Why does all this matter? The stakes are high. False claims about miracle cures, unproven treatments and dangerous remedies can and do spread rapidly, leading people to make the poorest of health decisions, with dire consequences. We do not have to go far back in time to draw on the lessons of our experience. It is therefore disappointing that the Government have not demonstrated, through this Bill, that they have learned the lessons of the Covid-19 pandemic. This is of concern to many health practitioners and representatives, as well as to Members of your Lordships’ House. We all remember the absolute horror of seeing false theories being spread quickly online, threatening to undermine the life-saving vaccine rollout. In recent years, the rising anti-vaccine sentiment has certainly contributed to outbreaks of preventable diseases that had previously been eradicated. This is a step backwards.
In 2020, an estimated 5,800 people globally were admitted to hospital because of false information online relating to Covid-19, with at least 800 people believed to have died because they followed this misinformation or disinformation. In 2021, the Royal College of Obstetricians and Gynaecologists found that only 40% of women offered the vaccine against Covid-19 had accepted it, with many waiting for more evidence that it would be safe. It is shocking to recall that, in October 2021, one in five of the most critically ill Covid patients was an unvaccinated pregnant woman.
If we look beyond Covid-19, we see misinformation and disinformation affecting many other aspects of health. I will give a few examples. There are false claims about cancer treatment—for example, lemons treat cancer better than chemotherapy; tumours are there to save your life; cannabis oil cures cancer; rubbing hydrogen peroxide on your skin will treat cancer. Just last year, the lack of publicly available information about Mpox fuelled misinformation online. There is an issue about the Government’s responsibility for ensuring that there is publicly available information about health risks. In this respect, the lack of it—the void—led to a varied interpretation and acceptance of the public health information that was available, limited though it was. UNAIDS also expressed concern that public messaging on Mpox used language and imagery that reinforced homophobic and racist stereotypes.
For children, harmful misinformation has linked the nasal flu vaccine to an increase in Strep A infections. In late 2022, nearly half of all parents falsely believed these claims, such that the uptake of the flu vaccine among two- and three-year-olds dropped by around 11%. It is not just that misinformation and disinformation may bombard us online and affect us; there are also opportunities for large language model AIs such as ChatGPT to spread misinformation.
The Government had originally promised to include protections from harmful false health content in their indicative list of harmful content that companies would have been required to address under the now removed adult safety duties, yet we find that the Bill maintains the status quo, whereby platforms are left to their own devices as to how they tackle health misinformation and disinformation, without the appropriate regulatory oversight. It is currently up to them, so they can remove it at scale or leave it completely unchecked, as we recently saw when Twitter stopped enforcing its Covid-19 misinformation policy. This threatens not just people's health but their freedom of expression and ability to make properly informed decisions. With that in mind, I look forward to amendments relating to media literacy in the next group that the Committee will consider.
I turn to the specific amendments. The new clause proposed in Amendment 52 would place a duty on category 1 platforms to undertake a health misinformation risk assessment and set out a policy on their treatment of health misinformation content. It would also require that the policy and related terms of service are consistently applied and clear and accessible—something that we have previously debated in this Committee. It also defines what is meant by
“harmful health disinformation and misinformation”— and, again, on that we have discussed the need for clarity and definition.
Amendment 99 would require Ofcom to consult an advisory committee on disinformation and misinformation when preparing draft codes of practice or amendments to such codes. Amendment 222 is a probing amendment and relates to the steps, if any, that Ofcom will be expected to take to avoid the advisory committee being dominated by representatives of regulated services. It is important to look at how the advisory committee is constructed, as that will be key not just to the confidence that it commands but to its effectiveness.
Amendment 223, in the name of my noble friend Lord Knight, addresses the matter of timeliness in respect of the establishment of the advisory committee, which should be within six months of the Bill being passed. Amendment 224, also in the name of my noble friend Lord Knight, would require the advisory committee to consider as part of its first report whether a dedicated Ofcom code of practice in this area would be effective in the public interest. This would check that we have the right building blocks in place. With that in mind, I beg to move.
My Lords, it is a great honour to rise after the noble Baroness, Lady Merron, who spoke so clearly about Amendment 52 and the group of amendments connected with health misinformation, some of which stand also in my name.
As the noble Baroness rightly pointed out, we have known for a long time about the negative impact of social media, with all its death scrolls, algorithms and rabbit holes, on vaccine uptake. In 2018, the University of Southampton did a study of pregnant women and found that those who reported using social media to research antenatal vaccinations were 58% less likely to accept the whooping cough vaccine. Since then, things have only got worse.
As a junior Health Minister during the pandemic, I saw how the successful vaccine rollout was at severe risk of being undermined by misinformation, amplified by foreign actors and monetised by cynical commercial interests. The challenge was enormous. The internet, as we know, is a highly curated environment that pushes content, functions and services that create an emotional response and retain our attention. Social media algorithms are absolutely the perfect tool for conspiracy theorists, and a pandemic necessarily raises everyone’s concerns. It was unsurprising that a lot of people went down various rabbit holes on health information.
The trust between our clinical professionals and their patients relies on a shared commitment to evidence-based science. That can quickly go out of the window if the algorithms are pushing rousing content that deliberately plays into people’s worst fears and anxieties, thereby displacing complex and nuanced analysis with simplistic attention-seeking hooks, based sometimes on complete nonsense. The noble Baroness, Lady Merron, mentioned lemons for cancer as a vivid example of that.
At the beginning of the vaccine programme, a thorough report by King's College London, funded by the NIHR health protection research unit, found that 14% of British adults believed the real purpose of mass vaccination against coronavirus was to track and control the population. That rose to an astonishing 42% among those who got their information from WhatsApp, 39% for YouTubers, 29% for the Twitterati and 28% for Facebookers. I remember that, when those statistics came through, it was clear that this important way out of the pandemic was in jeopardy.
I remind the Committee that a great many people make money out of such fear. I highly recommend the Oxford University Journal of Communication article on digital profiteering for a thorough and nuanced guide to the economics of the health misinformation industry. I also remind noble Lords that foreign actors and states are causing severe trouble in this area. "Foreign disinformation" social media campaigns are linked to falling vaccination rates, according to an international time-trend analysis published in BMJ Global Health.
As it happens, in the pandemic, the DHSC, the Cabinet Office and a wide group throughout government worked incredibly thoughtfully on a communications strategy that sought to answer people's questions and apply the sunlight of transparency to the vaccine process. It balanced the rights to freedom of expression with protecting our central strategy for emerging from the pandemic through the vaccine rollout. I express considerable thanks to those officials, and to the social media industry, which leant into the issue more out of a sense of good will than any legal obligation. I was aware of some of the legal ambiguities around that time.
Since then, things have gone backwards, not forwards. Hesitancy in the UK has risen, with a big impact on vaccine take-up rates. We are behind on 13 out of the 14 routine vaccine programmes, well behind the 95% target set by the World Health Organization. The results are clear: measles is rising because vaccine uptake is falling, and that is true of many common, avoidable diseases. As for the platforms, Twitter's decision at the end of last year suddenly to stop enforcing its Covid-19 misinformation policy was a retrograde step and possibly the beginning of a worrying trend of which we should all be conscious; it is one of the motivating reasons for this amendment.
Unfortunately, the Government's decision to remove from the Bill the provisions on content harmful to adults, and with that the scope to include harmful health content, has had unintended consequences and left a big gap. We will have learned nothing from the pandemic if we do not act to plug that gap. The amendment and associated amendments in the group seek to address this by introducing three duties, as the noble Baroness, Lady Merron, explained.
The first requirement is an assessment of the risks presented by harmful health disinformation and misinformation. Anyone who has been listening to these debates will recognise that this very much runs with the grain of the Bill’s approach and is consistent with many of the good things already in the Bill. Risk assessments are a very valuable tool in our approach to misinformation. I remind noble Lords that, for this Bill, “content” has a broad meaning that includes services and functions of a site, including the financial exploitation of that content. Secondly, the amendment would require large platforms to publish a policy setting out their approach to health misinformation. Each policy would have to explain how it is designed to mitigate or manage risks and should be kept up to date and maintained. That kind of transparency is at the heart of how we hold platforms to account. Lastly, platforms would be required to summarise their health misinformation policy in terms that consumers can properly understand.
This approach is consistent with the spirit of the Bill’s treatment of many harms: we are seeking transparency and we are creating accountability, but we are not mandating protocols. The consequences are clear. Users, health researchers and internet analysts would be able to see clearly how a platform proposes to deal with health misinformation that they may encounter on a particular service and make informed decisions as a result. The regulator would be able to see clearly what the nature of these risks is.
May I briefly tackle some natural concerns? On the question of protection of freedom of expression, my noble friend Lord Moylan rightly reminded us on Tuesday of Article 19 of the UN Universal Declaration of Human Rights: everyone has the right to freedom of opinion and expression. On this point, I make it clear that this amendment would not require platforms to remove health misinformation from their service or prescribe particular responses. In fact, I would go further. I recognise that it is important to have a full debate about the efficacy, safety and financial wisdom of treatments, cures and vaccines. This amendment would do nothing to close down that debate. It is about clarity. The purpose of the amendment is to prevent providers ducking the question of how they handle health misinformation. To that extent, it would help both those who are worried about health misinformation and those who are worried about being branded as sharing health misinformation to know where the platforms are coming from. It would ensure that providers establish what is happening on their service and what the associated risks to their users are, and then shine a light on how they intend to deal with it.
I also make it clear that this is not just about videos, articles and tweets. We should also be considering whether back-end payment mechanisms, including payment intermediaries, donation collection services and storefront support, are being used to monetise health misinformation and enable bad actors. During the pandemic, the platforms endorsed the principle that no company should be profiting from Covid-19 vaccine misinformation, for instance. It is vital that this is considered as part of the platforms' response to health misinformation. We should have transparency about whether platforms such as PayPal and Google are accepting donations, membership or merchandise payments from known misinformation businesses. Is Amazon, for instance, removing products that are used to disseminate health misinformation? Are crowdfunding websites hosting health misinformation campaigns from bad actors?
To anticipate my noble friend the Minister, I say that he will likely remind us that there are measures already in place in the Bill if the content is criminal or likely to be viewed by children, and I welcome those provisions. However, as the Bill stands, the actual policies on misinformation and the financial exploitation of that content will be a matter of platform discretion, with no clarity for users or the regulator. It will be out of sight of clear regulatory oversight. This is a mistake, as Twitter has just shown, and that is why we need this change.
Senior clinicians including Sir Jeremy Farrar, Professor John Bell and the noble Lord, Lord Darzi, have written to the Secretary of State to raise their concerns. These are serious players voicing serious concerns. The approach in Amendment 52 is, in my view, the best and most proportionate way to protect those who are most vulnerable to false and misleading information.
My Lords, I shall speak to Amendments 59, 107 and 264 in this group, all of which are in my name. Like the noble Baroness, Lady Merron, I express gratitude to Full Fact for its advice and support in preparing them.
My noble friend Lord Bethell has just reminded us of the very large degree of discretion that the legislation gives to platforms in how they respond to information that we might or might not all agree is harmful misinformation or disinformation. We all agree that those categories exist; we might disagree about what falls into them. Either way, the discretion given to the providers in how to handle such material is large. My amendments do not deal specifically with health-related misinformation or disinformation but are broader.
The first two, Amendments 59 and 107—I am grateful to my noble friend Lord Strathcarron for his support of Amendment 59—try to probe what the Government think platforms should do when harmful material, misinformation and disinformation appear on their platforms. As things stand, the Government require that the platforms should decide what content is not allowed on their platforms; then they should display this in their terms of service; and they should apply a consistent approach in how they manage content that is in breach of their terms of service. The only requirement is for consistency. I have no objection to their being required to behave consistently, but that is the principal requirement.
What Amendments 59 and 107 do—they have similar effects in different parts of the Bill; one bears directly on the platforms, the other on the codes of practice—is require them also to act proportionately. Here, it is worth briefly articulating that there are two fears about platforms and how they respond, both legitimate. The first, which some noble Lords may share, is that platforms will not respond at all: in other words, they will leave harmful material on their sites unaddressed.
The other fear, which is what I want to emphasise, is that platforms will be overzealous in removing material, because they will have written their terms of service, as I said on a previous day in Committee, not only for their commercial advantage but also for their legal advantage. They will have wanted to give themselves a wide latitude to remove material, or to close accounts, because that will help cover their backs legally. Of course, once they have granted themselves those powers, the fear is that they will use them overzealously, even in cases where that would be an overreaction. These two amendments seek to oblige the platforms to respond proportionately, to consider alternative approaches to cancellation and removal of accounts and to be obliged to look at those as well.
There are alternative approaches that they could consider. Some companies already set out to promote good information, if you like, and indeed we saw that in the Covid-19 pandemic. My noble friend Lord Bethell said that they did so, and they did so voluntarily. This amendment would implicitly, if not explicitly, encourage that sort of behaviour as a first resort, rather than cancellation, blocking and removal of material. Platforms would still have the powers to cancel, block and remove; it is a question of priority and proportionality.
There are also labels that providers can put on material that they think is dubious, saying, “Be careful before you read this”, or before you retweet it; “This is dubious material”. Those practices should also be encouraged. These amendments are intended to do that, but they are intended, first and foremost, to probe what the Government’s attitude is to this, whether they believe they have any role in giving guidance on this point and how they are going to do so, whether through legislation or in some other way, because many of us would like to know.
Amendment 264, supported by my noble friend Lord Strathcarron and the noble Lord, Lord Clement-Jones, deals with quite a different matter, although it falls under the general category of misinformation and disinformation: the role the Government take directly in seeking to correct misinformation and disinformation on the internet. We know that No. 10 has a unit with this explicit purpose and that during the Covid pandemic it deployed military resources to assist it in doing so. Nothing in this amendment would prevent that continuing; nothing in it is intended to create scare stories in people’s minds about an overweening Government manipulating us. It is intended to bring transparency to that process.
Amendment 264 would require the Government to produce a report within six months of the enactment of the Bill, and annually thereafter, setting out relevant representations they had made to providers during the previous year. It specifies the relevant representations: trying to persuade platforms to modify their terms of service, to restrict or remove a particular user's access, or to take down, reduce the visibility of or restrict access to content. The Secretary of State would be required to present a new report to Parliament once a year so that we understood what was happening. As I say, it would not inhibit the Government doing it—there may well be good reasons for their doing so—but in this age people feel entitled to know.
Concerns might be expressed that, in doing so, national security might be compromised in some way because of the involvement of the Army or whatever. However, as drafted, this amendment gives the Secretary of State the power, simply if he considers something to be harmful to national security, to withhold it from publication, so I think no national security argument can be made against this. Instead, he would be required to summarise it in a report to the Intelligence and Security Committee of Parliament, so it would not enter the public domain. That is a grown-up thing to ask for. I am sustained in that view by the support for the amendment from at least one opposition spokesman.
Those are the two things I am trying to achieve, which in many ways speak for themselves. I hope my noble friend will feel able to support them.
My Lords, I have given notice in this group that I believe Clause 139 should not stand part of the Bill. I want to remove the idea of Ofcom having any kind of advisory committee on misinformation and disinformation, at least as it has been understood. I welcome the fact that the Government have in general steered clear of putting disinformation and misinformation into the Bill, because the whole narrative around them has become politicised and even weaponised, often to delegitimise opinions that do not fit into a narrow set of official opinions or simply to shout abuse at opponents. We all want the truth—if only it were as simple as hiring fact-checkers or setting up a committee.
I am particularly opposed to Amendment 52 from the noble Baroness, Lady Merron, and the noble Lord, Lord Bethell. They have both spoken very eloquently of their concerns, focusing on harmful health misinformation and disinformation. I oppose it because it precisely illustrates my point about the danger of these terms being used as propaganda.
There was an interesting and important investigative report brought out in January this year by Big Brother Watch entitled Inside Whitehall's Ministry of Truth—How Secretive "Anti-Misinformation" Teams Conducted Mass Political Monitoring. It was rather a dramatic title. We now know that the DCMS had a counter-disinformation unit that had a special relationship with social media companies, and it used to recommend that content be removed. Interestingly, in relation to other groups we have discussed, it used third-party contractors to trawl through Twitter looking for perceived terms of service violations as a reason for content to be removed. This information warfare tactic, as we might call it, was used to target politicians and high-profile journalists who raised doubts or asked awkward questions about the official pandemic response. Dissenting views were reported to No. 10 and then often denounced as misinformation, with Ministers pushing social media platforms to remove posts and promote Government-sponsored lines.
It has been revealed that a similar fake news unit was in the Cabinet Office. It got Whitehall departments to attack newspapers for publishing articles that analysed Covid-19 modelling, not because the analysis was inaccurate—the modelling itself was inaccurate in many instances—but because it feared that any scepticism would affect compliance with the rules. David Davis MP appeared in an internal report on vaccine hesitancy; his crime was arguing against vaccine passports as discriminatory, which was a valid civil liberties objection but was characterised as health misinformation. A similar approach was taken to vaccine mandates, which led to tens of thousands of front-line care workers being sacked even though, by the time this happened, the facts were known: the vaccine was absolutely invaluable in protecting individual health, but it did not stop transmission, so there was no need for vaccine mandates to be implemented. That this was never discussed in the public sphere is itself a real example of misinformation.
Professor Carl Heneghan's Spectator article questioning whether the rule of six was an arbitrary number was also flagged across Whitehall as misinformation, but we now know that the rule of six was arbitrary. Anyone who has read the former Health Secretary Matt Hancock's WhatsApp messages, which were leaked to the Telegraph and which many of us read with interest, will know that many things presented as "the science" and as factual were driven by politics more than anything else. Covid policies were not all based on fact, yet it was others who were accused of misinformation.
Beyond health, the Twitter files leaked by Elon Musk, when he became its new owner, show the dangers of using the terms misinformation and disinformation to pressure big tech platforms into becoming tools of political censorship. In the run-up to the 2020 election, Joe Biden’s presidential campaign team routinely flagged tweets and accounts it wanted to be censored, and we have all seen the screengrab of email exchanges between executives as evidence of that. Twitter suppressed the New York Post’s infamous Hunter Biden laptop exposé on the spurious grounds that it was “planted Russian misinformation”. The Post was even locked out of its own account. It took 18 months for the Washington Post and the New York Times to get hold of, and investigate, Hunter Biden’s emails, and both determined that the New York Post’s original report was indeed legitimate and factually accurate, but it was suppressed as misinformation when it might have made some political difference in an election.
We might say that all is fair in love and war and elections but, to make us think about what we mean by “misinformation” and why it is not so simple, was the Labour Party attack ad that claimed Rishi Sunak did not believe that paedophiles should go to jail fair comment or disinformation, and who decides? I know that Tobias Ellwood MP called for a cross-party inquiry on the issue, calling on social media platforms to do more to combat “malicious political campaigns”. I am not saying that I have a view one way or another on this, but my question is: in that instance, who gets to label information as “malicious” or “fake” or “misinformation”? Who gets the final say? Is it a black and white issue? How can we avoid it becoming partisan?
Yesterday, at the Second Reading of the Illegal Migration Bill, I listened very carefully to the many contributions. Huge numbers of noble Lords continually claimed that all those in the small boats crossing the channel were fleeing war and persecution—fleeing for their lives. Factually that was inaccurate, according to detailed statistics and evidence, yet no one called those contributors "peddlers of misinformation", because those speaking are considered to be compassionate and on the righteous side of the angels—at least in the case of the most reverend Primate the Archbishop of Canterbury—and, as defined by this House, they were seen to be speaking the truth, regardless of the evidence. My point is that it was a political argument, yet here we are focusing on this notion that the public are being duped by misinformation.
What about those who tell children that there are 140 genders to choose from, or who deny that biological sex is immutable? I would say that is dangerous misinformation or disinformation; others would say that me saying that is bigoted. There is at least an argument to be had, but it illustrates that the labelling process will always be contentious, and therefore I have to ask: who is qualified to decide?
A number of amendments in this group put forward a variety of "experts" who should be, for example, on the advisory committee—those who should decide and those who should not—and I want to look at this notion of expertise in truth. For example, in a report by the Communications and Digital Committee relating to an incident in which Facebook marked as "false" a post on Covid by a professor of evidence-based medicine at Oxford University, the committee asked Facebook about the qualifications of those who made that judgment—the fact-checkers. It was told that they were
“certified by the International Fact-Checking Network”.
Now, who are they? The professor of evidence-based medicine at Oxford University might have rather more expertise here, and I do not want a Gradgrind version of truth in relation to facts.
If it were easy to determine the truth, we would be able to wipe out centuries of philosophy, but if we are going to have a committee determining the truth, could we also have some experts in civil liberties—maybe the Free Speech Union, Big Brother Watch, and the Index on Censorship—on a committee to ensure that we do not take down accurate information under the auspices of “misinformation”? Are private tech companies, or professional fact-checkers, or specially selected experts, best placed to judge the reliability of all sorts of information and of the truth, which I would say requires judgement, analysis and competing perspectives?
Too promiscuous a use of the terms "misinformation" and "disinformation" can also cause problems, and often whole swathes of opinion are lumped together. Those who raised civil liberties objections to lockdown were denounced as "Covidiots", conspiracy theorists peddling misinformation and Covid deniers, on a par with those who suggested that the virus was linked to everything from 5G masts to a conscious "plandemic".
Those who now raise queries about suppressing any reference to vaccine harms, or who are concerned that people who have suffered proven vaccine-related harms are not being shown due support, are often lumped in with those who claim the vaccine was a crime against humanity. All are accused of misinformation, with no nuance and no attempt at distinguishing very different perspectives. Therefore, with such wide-ranging views labelled as “misinformation” as a means of censorship, those good intentions can backfire—and I do believe that there are good intentions behind many of these amendments.
To conclude, banning inaccurate ideas—if they are actually censored as misinformation or disinformation—can push them underground and allow them to fester unchallenged in echo chambers. It can also create martyrs. How often do we hear those who have embraced full-blown conspiracy theories, often peddling cranky and scaremongering theories, say, “They’re trying to silence me because they know that what I’m saying is true. What are they afraid of?” Historically, I think the best solution to bad speech is more speech and more argument; the fullest debate, discussion, scholarship, investigation and research—yes, googling, using Wikipedia or reading the odd book—and, of course, judgment and common sense to figure it out.
We should also remember from our history that what is labelled as false by a minority of people can be invaluable scepticism, challenging a consensus and eventually allowing truth to emerge. The fact—the truth—was once that the world was flat. Luckily, the fact-checkers were not around to ban the minority who challenged that view, and now we know a different truth.
My Lords, I have attached my name to Amendments 52 and 99 in the name of the noble Baroness, Lady Merron, respectively signed by the noble Lords, Lord Bethell and Lord Clement-Jones, and Amendment 222 in her name. I entirely agree with what both the noble Baroness, Lady Merron, and the noble Lord, Lord Bethell, said. The noble Lord in particular gave us a huge amount of very well-evidenced information on the damage done during the Covid pandemic—and continuing to be done—by disinformation and misinformation. I will not repeat what they said about the damage done by the spread of conspiracy theories and anti-vaccination falsehoods and the kind of malicious bots, often driven by state actors, that have caused such damage.
I want to come from a different angle. I think we were—until time prevented it, unfortunately—going to hear from the noble Baroness, Lady Finlay of Llandaff, which would have been a valuable contribution to this debate. Her expert medical perspective would have been very useful. I think that she and I were the only two Members in the Committee who took part in the passage of the Medicines and Medical Devices Act. I think it was before the time of the noble Lord, Lord Bethell—he is shaking his head; I apologise. He took part in that as well. I also want to make reference to discussions and debates I had with him over changes to regulations on medical testing.
The additional point I want to make about disinformation and misinformation—this applies in particular to Amendment 222 about the independence of the advisory committee on disinformation and misinformation—is that we are now seeing in our medical system a huge rise in the number of private actors. These are companies seeking to encourage consumers or patients to take tests outside the NHS system and to get involved in a whole set of private provision. We are seeing a huge amount of advertising of foreign medical provision, given the pressures that our NHS is under. In the UK we have had traditionally, and still have, rules that place severe restrictions on the direct advertising of medicines and medical devices to patients— unlike, for example, the United States, where it is very much open slather, with some disastrous and very visible impacts.
We need to think about the fact that the internet, for better or for worse, is now a part of our medical system. If people feel ill, the first place they go—before they call the NHS, visit their pharmacist or whatever—is very often the internet, through these providers. We need to think about this in the round and as part of the medical system. We need to think about how our entire medical ecology is working, and that is why I believe we need amendments like these.
The noble Baroness makes two incredibly important points. We are seeking to give people greater agency over their own health, and the internet has been an enormous bonus in doing that, but of course that environment needs to be curated extremely well. We are also seeking to make use of health tech—non-traditional clinical interventions, some of which do not pierce the skin and therefore fall outside the normal conversation with GPs—and to give people the power to make decisions about the use of these new technologies for themselves. That is why curation of the health information environment is so important. Does the noble Baroness have any reflections on that?
I thank the noble Lord for his intervention. He has made me think of the fact that a particular area where this may be of grave concern is cosmetic procedures, which I think we debated during the passage of the Health and Care Act. These things are all interrelated, and it is important that we see them in an interrelated way as part of what is now the health system.
My Lords, I will speak to a number of amendments in this group. I want to make the point that misinformation and disinformation was probably the issue we struggled with the most in the pre-legislative committee. We recognised the extraordinary harm it did, but also—as the noble Baroness, Lady Fox, said—that there is no one great truth. However, algorithmic spread and the drip, drip, drip of material that is not based on any search criteria or expression of an opinion but simply gives you more of the same, particularly the most shocking, moves very marginal views into the mainstream.
I am concerned that our debates over the last five days have concentrated so much on content, and that the freedom we seek does not take enough account of the way in which companies currently exercise control over the information we see. Correlations such as “Men who like barbecues are also susceptible to conspiracy theories” are then exploited to spread toxic theories that end in real-world harm or political tricks that show, for example, the Democrats as a paedophile group. Only last week I saw a series of pictures, presented as “evidence”, of President Biden caught in a compromising situation that gave truth to that lie. As Maria Ressa, the Nobel Peace Prize winner for her contribution to the freedom of expression, said in her acceptance speech:
“Tech sucked up our personal experiences and data, organized it with artificial intelligence, manipulated us with it, and created behavior at a scale that brought out the worst in humanity”.
That is the background to this set of amendments that we must take seriously.
As the noble Lord, Lord Bethell, said, Amendment 52 will ensure that platforms undertake a health misinformation risk assessment and provide a clear policy on dealing with harmful, false and misleading information. I put it to the Committee that, without this requirement, we will keep the status quo in which clicks are king, not health information.
It is a particular pleasure to support the noble Lord, Lord Moylan, on his Amendments 59 and 107. Like him, I am instinctively against taking material down. There are content-neutral ways of marking or questioning material, offering alternatives and signposting to diverse sources—not only true but diverse. These can break this toxic drip feed for long enough for people to think before they share, post and make personal decisions about the health information that they are receiving.
I am not incredibly thrilled by a committee for every occasion but, since the Bill is silent on the issue of misinformation and disinformation—which clearly will be supercharged by the rise of large language models—it would be good to give a formal role to this advisory committee, so that it can make a meaningful and formal contribution to Ofcom as it develops not only this code of conduct but all codes of conduct.
Likewise, I am very supportive of Amendment 222, which seeks independence for the chair of the advisory body. I have seen at first hand how a combination of regulatory capture and a very litigious sector with deep pockets slows down progress and transparency. While the independence of the chair should be a given, our collective lived experience would suggest otherwise. This amendment would make that requirement clear.
Finally, and in a way most importantly, Amendment 224 would allow Ofcom to consider after the fact whether a code of conduct is necessary. This strikes a balance between adding to its current workload, which we are trying not to do, and tying one hand behind its back in the future. I would be grateful to hear from the Minister why we would not give Ofcom this option as a reasonable piece of future-proofing, given that this issue will become ever more important as AI creates layers of misinformation and disinformation at scale.
My Lords, I support Amendment 52, tabled by my noble friend Lady Merron. This is an important issue which must be addressed in the Bill if we are to make real progress in making the internet a safer space, not just for children but for vulnerable adults.
We have the opportunity to learn lessons from the pandemic, where misinformation had a devastating impact, spreading rapidly online like the virus and threatening to undermine the vaccine rollout. If the Government had kept their earlier promise to include protection from harmful false health content in their indicative list of harmful content that companies would have been required to address under the now removed adult safety duties, these amendments would not be necessary.
It is naive to think that platforms will behave responsibly. Currently, they are left to their own devices in how they tackle health misinformation, without appropriate regulatory oversight. They can remove it at scale or leave it completely unchecked, as illustrated by Twitter’s decision to stop enforcing its Covid-19 misinformation policies, as other noble Lords have pointed out.
It is not a question of maintaining free speech, as some might argue. It was the most vulnerable groups who suffered from the spread of misinformation online—pregnant women and the BAME community, who had higher illness rates. Studies have shown that, proportionately, more of them died, not just because they were front-line workers but because of rumours spread in the community which resulted in vaccine hesitancy, with devastating consequences. As other noble Lords have pointed out, in 2021 the Royal College of Obstetricians and Gynaecologists found that only 42% of women who had been offered the vaccine accepted it, and in October that year one in five of the most critically ill Covid patients were unvaccinated pregnant women. That is a heartbreaking statistic.
Unfortunately, it is not just vaccine fears that are spread on the internet. Other harmful theories can affect patients with cancer, mental health issues and sexual health issues, and, most worryingly, can affect children’s health. Rumours and misinformation play on the minds of the most vulnerable. The Government have a duty to protect people, and by accepting this amendment they would go some way to addressing this.
Platforms must undertake a health misinformation risk assessment and have a clear policy on dealing with harmful, false and misleading health information in their terms of service. They have the money and the expertise to do this, and Parliament must insist. As my noble friend Lady Merron said, I do not think that the Minister can say that the false communications offence in Clause 160 will address the problem, as it covers only a user sending a knowingly false communication with the intention of causing harm. The charity Full Fact has stated that this offence will exclude most health misinformation that it monitors online.
My Lords, this has been a very interesting debate. I absolutely agree with what the noble Baroness, Lady Kidron, said right at the beginning of her speech. This was one of the most difficult areas that the Joint Committee had to look at. I am not saying that anything that we said was particularly original. We tried to say that this issue could be partly addressed by greater media literacy, which, no doubt, we will be talking about later today; we talked about transparency of system design, and about better enforcement of service terms and conditions. But things have moved on. Clearly, many of us think that the way that the current Bill is drafted is inadequate. However, the Government did move towards proposing a committee to review misinformation and disinformation. That is welcome, but I believe that these amendments are taking the thinking and actions a step forward.
I do not agree with the noble Baroness, Lady Fox. What she has been saying is really a counsel of despair on not being able to deal with misinformation and disinformation. I was really interested to hear what the noble Lord, Lord Bethell, had to say about his experience —this is pretty difficult stuff to tackle when you are in a position of that sort. I support the noble Baronesses, Lady Bennett and Lady Healy, in what they had to say about this particular aspect. As the noble Baroness, Lady Kidron, said, it is about the system and the amplification that takes place, which brings out the worst in humanity.
The Puttnam report, by the Democracy and Digital Technologies Committee, also raised this. If Lord Puttnam had not retired from this House, he would be here today, saying that we need to do a lot more about this than we are proposing even in the amendments. In the report, the committee talked about a pandemic of misinformation. Nowhere is that more apparent than in health. The report was prescient; it came out in June 2020, some three years ago, well before we heard and saw all kinds of disinformation about vaccines.
We are seeing increasing numbers of commentators talking about the impact of misinformation and disinformation. We have had Ciaran Martin, former head of the National Cyber Security Centre, talking about the dangers to democracy. We have heard Sir Jeremy Fleming, head of GCHQ, saying that the main threat from AI is disinformation. We have had some really powerful statements, quite apart from seeing the impact of disinformation and misinformation on social media platforms.
On these Benches, we believe that the Government have a responsibility to intervene on misinformation and to support legislation to stop the spread of fake news. I believe that the public have an expectation that the Government do that and that the large social media companies address this issue on their platforms, hence my support for the amendments in these groups.
It has to be balanced. That is why I support the amendments by the noble Lord, Lord Moylan, as well. We have a common interest in trying to make sure that, while preventing misinformation and disinformation, we do so in a proportionate way, as he described. That is of great importance.
The noble Lord, Lord Bethell, did not quote at length from the letter from Full Fact and all the health professionals, but, notably, it says:
“One key way that we can protect the future of our healthcare system is to ensure that internet companies have clear policies on how they identify the harmful health misinformation that appears on their platforms, as well as consistent approaches in dealing with it”.
It is powerful testimony from some very experienced and senior health professionals.
The focus of many of these amendments is on the way that the advisory committee will operate. Having an independent chair is of great importance, as is having a time limit within which there must be a report, along with other aspects.
The noble Lord, Lord Moylan, referred in one of the amendments to addressing the opacity of existing government methods for tackling disinformation. He mentioned one unit, but there are three units that I have been briefed about. There is the counter-disinformation unit in DCMS, which addresses mainly Covid issues that breach companies’ terms of service, and, recently, Russia/Ukraine issues. Then we have the Government Information Cell, which is based in the FCDO, and the rapid response unit, which I think he referred to, in the Cabinet Office. Ministers referred to these and said that the principal focus of the DCMS unit during the pandemic was Covid et cetera, but we do not know very much about what these units do or what their criteria are. Do they have any relationship with Ofcom? Will they have a relationship with Ofcom? It is important that we have something that reduces that level of opacity and opens up what those units do to a greater degree of scrutiny.
The only direct reference to misinformation in the Bill as it stands is to the advisory committee, so it is important that we know how it fits in with Ofcom's wider regulatory functions, and that there is a duty to create a code of practice on misinformation and disinformation. The advisory committee should be creative in the way it operates. One of the difficult issues we found is that there is not a great deal of knowledge out there about how to tackle misinformation and disinformation in a systemic way.
Finally, I was very interested in the briefing that noble Lords probably all received from Adobe, which talked about the Content Authenticity Initiative. That is exactly the kind of thing the advisory committee should be exploring. Apparently, it has more than 1,000 members, including media and tech companies, NGOs and so on. Its ambition is to promote the adoption of an open industry standard for content authenticity and provenance. That may sound like the holy grail, but it is something we should be trying to work towards.
These amendments are a means of at least groping towards a better way of tackling misinformation and disinformation, which, as we have heard, can have a huge impact, particularly in health.
My Lords, this debate has demonstrated the diversity of opinion regarding misinformation and disinformation—as the noble Lord said, the Joint Committee gave a lot of thought to this issue—as well as the difficulty of finding the truth of very complex issues while not shutting down legitimate debate. It is therefore important that we legislate in a way that takes a balanced approach to tackling this, keeping people safe online while protecting freedom of expression.
The Government take misinformation and disinformation very seriously. From Covid-19 to Russia’s use of disinformation as a tool in its illegal invasion of Ukraine, it is a pervasive threat, and I pay tribute to the work of my noble friend Lord Bethell and his colleagues in the Department of Health and Social Care during the pandemic to counter the cynical and exploitative forces that sought to undermine the heroic effort to get people vaccinated and to escape from the clutches of Covid-19.
We recognise that misinformation and disinformation come in many forms, and the Bill reflects this. Its focus is rightly on tackling the most egregious, illegal forms of misinformation and disinformation, such as content which amounts to the foreign interference offence or which is harmful to children—for instance, that which intersects with named categories of primary priority or priority content.
That is not the only way in which the Bill seeks to tackle it, however. The new terms of service duties for category 1 services will hold companies to account over how they say they treat misinformation and disinformation on their services. However, the Government are not in the business of telling companies what legal content they can and cannot allow online, and the Bill should not and will not prevent adults accessing legal content. In addition, the Bill will establish an advisory committee on misinformation and disinformation to provide advice to Ofcom on how they should be tackled online. Ofcom will be given the tools to understand how effectively misinformation and disinformation are being addressed by platforms through transparency reports and information-gathering powers.
Amendment 52 from the noble Baroness, Lady Merron, seeks to introduce a new duty on platforms in relation to health misinformation and disinformation for adult users, while Amendments 59 and 107 from my noble friend Lord Moylan aim to introduce new proportionality duties for platforms tackling misinformation and disinformation. The Bill already addresses the most egregious types of misinformation and disinformation in a proportionate way that respects freedom of expression by focusing on misinformation and disinformation that are illegal or harmful to children.
I am curious as to what the Bill says about misinformation and disinformation in relation to children. My understanding of primary priority and priority harms is that they concern issues such as self-harm and pornography, but do they say anything specific about misinformation of the kind we have been discussing and whether children will be protected from it?
I am sorry—I am not sure I follow the noble Baroness’s question.
Twice so far in his reply, the Minister has said that this measure will protect children from misinformation and disinformation. I was just curious, because I have not seen any sign of that, either in discussions or in the Bill. I was making a distinction regarding harmful content that we know the shape of—for example, pornography and self-harm, which are not, in themselves, misinformation or disinformation of the kind we are discussing now. It is news to me that children are going to be protected from this, and I am delighted, but I was just checking.
Yes, that is what the measure does—for instance, where it intersects with the named categories of primary priority or priority content in the Bill, although that is not the only way the Bill does it. This will be covered by non-designated content that is harmful to children. As we have said, we will bring forward amendments on Report—which is perhaps why the noble Baroness has not seen them in the material in front of us—regarding material harms to children, and they will provide further detail and clarity.
Returning to the advisory committee that the Bill sets up and the amendments from the noble Baroness, Lady Merron, and my noble friend Lord Moylan, all regulated service providers will be forced to take action against illegal misinformation and disinformation in scope of the Bill. That includes the new false communication offences in the Bill that will capture communications where the sender knows the information to be false but sends it intending to cause harm—for example, hoax cures for a virus such as Covid-19. The noble Baroness is right to say that that is a slightly different approach from the one taken in her amendment, but we think it an appropriate and proportionate response to tackling damaging and illegal misinformation and disinformation. If a platform is likely to be accessed by children, it will have to protect them from encountering misinformation and disinformation content that meets the Bill’s threshold for content that is harmful to children. Again, that is an appropriate and proportionate response.
Turning to the points made by my noble friend Lord Moylan and the noble Baroness, Lady Fox, services will also need to have particular regard to freedom of expression when complying with their safety duties. Ofcom will be required to set out steps that providers can take when complying with their safety duties in the codes of practice, including what is proportionate for different providers and how freedom of expression can be protected.
My noble friend Lord Bethell and the noble Baroness, Lady Merron, are concerned that health misinformation and disinformation will not be adequately covered by this. Their amendment seeks to tackle that but, in doing so, mimics provisions on content harmful to adults previously included in the Bill which the Government consciously removed last year following debates in another place. The Government take concerns about health-related misinformation and disinformation very seriously. Our approach will serve a purpose of transparency and accountability by ensuring that platforms are transparent and accountable to their users about what they will and will not allow on their services.
Under the new terms of service duties for category 1 services, if a platform’s terms of service prohibit certain types of misinformation and disinformation, the platform will have to remove that content. That includes anti-vaccination falsehoods and other health-related misinformation and disinformation where the terms of service prohibit them. This is an appropriate response which prevents services from arbitrarily removing or restricting legal content, however controversial it may be, or from suspending or banning users other than in accordance with their expressed terms of service.
The Bill will protect people from the most egregious types of health-related misinformation and disinformation while still protecting freedom of expression and allowing users to ask genuine questions about health-related matters. There are many examples from recent history—Primodos, Thalidomide and others—which point to the need for legitimate debate about health-related matters, sometimes against companies which have deep pockets to defend the status quo.
My noble friend Lord Bethell also raised concerns about the role that algorithms play in pushing content. I reassure him that all companies will face enforcement action if illegal content in scope of the Bill is being promoted to users via algorithms. Ofcom will have a range of powers to assess whether companies are fulfilling their regulatory requirements in relation to the operation of their algorithms.
In circumstances where there is a significant threat to public health, the Bill already provides additional powers for the Secretary of State to require Ofcom to prioritise specified objectives when carrying out its media literacy activity and to require that companies report on the action they are taking to address the threat. The advisory committee on misinformation and disinformation will also be given the flexibility and expertise to consider providing advice to Ofcom on this issue, should it choose to.
Amendments 99 and 222 from the noble Baroness, Lady Merron, and Amendments 223 and 224 from the noble Lord, Lord Knight of Weymouth, relate to the advisory committee. Disinformation is a pervasive and evolving threat. The Government believe that responding to the issue effectively requires a multifaceted, whole-of-society approach. That is what the advisory committee seeks to do by bringing together technology companies, civil society organisations and sector experts to advise Ofcom in building cross-sector understanding and technical knowledge of the challenges and how best to tackle them. The Government see this as an essential part of the Bill’s response to this issue.
I understand the desire of noble Lords to ensure that the committee conducts its important work as quickly as possible, but it is imperative that Ofcom has the appropriate time and space to appoint the best possible committee and that its independence as a regulator is respected. Ofcom is well versed in setting up statutory committees and in ensuring that committees established under statute meet their obligations while maintaining impartiality and integrity. Seeking to prescribe timeframes or the committee’s composition risks impeding Ofcom’s ability to run a transparent process that finds the most suitable candidates. Given the evolving nature of disinformation and the online realm, the advisory committee will also need the flexibility to adapt and respond. It would therefore not be appropriate for the Bill to be overly prescriptive about the role of the advisory committee or to mandate the matters on which it must report.
The noble Baroness, Lady Fox of Buckley, asked whether the committee could include civil liberties representatives. It is for Ofcom to decide who is on the committee, but Ofcom must have regard to the desirability of including, among others, people representing the interests of UK users of regulated services, which could include civil liberties groups.
The noble Baroness, Lady Kidron, raised the challenges of artificial intelligence. Anything created by artificial intelligence and shared on an in-scope service by a user will qualify as user-generated content. It would therefore be covered by the Bill’s safety duties, including the duties to protect children from harmful misinformation and disinformation and to ensure that platforms properly enforce their terms of service for adults.
I turn to the points raised in my noble friend Lord Moylan’s Amendment 264. Alongside this strong legislative response, the Government will continue their operational response to tackling misinformation and disinformation. As part of this work, the Government meet social media companies on a regular basis to discuss a range of issues. These meetings are conducted in the same way that the Government would engage with any other external party, and in accordance with the well-established transparency processes and requirements.
The Government’s operational work also seeks to understand misinformation and disinformation narratives that are harmful to the UK, to build an assessment of their risk and threat. We uphold the same commitment to freedom of expression in our operational response as we do in our legislative response. As I said, we are not in the business of telling companies what legal content they can and cannot allow. Indeed, under the Bill, category 1 services must set clear terms of service that are easy for users to understand and are consistently enforced, ensuring new levels of transparency and accountability.
Our operational response will sit alongside our legislative response. Together, these measures have been designed to tackle misinformation and disinformation robustly, ensuring users’ safety while promoting a thriving and lively democracy where freedom of expression is protected.
The noble Baroness, Lady Fox, and the noble Lord, Lord Clement-Jones, asked about the counter-disinformation unit, which is led by the Department for Science, Innovation and Technology. The unit works to understand attempts to artificially manipulate the information environment, and to understand the scope, scale and reach of misinformation and disinformation. It responds to acute information incidents, such as Russian information operations during the war in Ukraine, those we saw during the pandemic and those around important events such as general elections. It does not monitor individuals; rather, its focus is on helping the Government understand online misinformation and disinformation narratives and threats.
When harmful narratives are identified, the unit works with departments across Whitehall to deploy the appropriate response, which could involve a direct rebuttal on social media or awareness-raising campaigns to promote the facts. Therefore, the primary purpose is not to monitor for harmful content to flag to social media companies—the noble Baroness raised this point—but the department may notify the relevant platform if, in the course of its work, it identifies content that potentially violates platforms’ terms of service, including co-ordinated, inauthentic or manipulative behaviour. It is then up to the platform to decide whether to take action against the content, based on its own assessment and terms of service.
The Minister mentioned “acute” examples of misinformation and used the example of the pandemic. I tried to illustrate that perhaps, with hindsight, what were seen as acute examples of misinformation turned out to be rather more accurate than we were led to believe at the time. So my concern is that there is already an atmosphere of scepticism about official opinion, which is not the same as misinformation, as it is sometimes presented. I used the American example of the Hunter Biden laptop so we could take a step away.
This might be an appropriate moment for me to say—on the back of that—that, although my noble friend explained current government practice, he has not addressed my point on why there should not be an annual report to Parliament that describes what government has done on these various fronts. If the Government regularly meet newspaper publishers to discuss the quality of information in their newspapers, I for one would have entire confidence that the Government were doing so in the public interest, but I would still quite like—I think the Government would agree on this—a report on what was happening, making an exception for national security. That would still be a good thing to do. Will my noble friend explain why we cannot be told?
While I am happy to elaborate on the work of the counter-disinformation unit in the way I just have, the Government cannot share operational details about its work, as that would give malign actors insight into the scope and scale of our capabilities; that would not be in the public interest. Moreover, reporting the representations made to platforms by the unit would be unnecessary, as it would overlook both the existing processes that govern engagement with external parties and the new protections introduced through the Bill.
In the first intervention, the noble Baroness, Lady Fox, gave a number of examples, some of which concern debatable, contestable facts. Companies may well choose to keep such material on their platforms within their terms of service. But we also saw deliberate misinformation and disinformation during the pandemic, including harmful disinformation promoted by foreign actors. It is right that we take action against this.
I hope that I have given noble Lords some reassurance on the points raised about the amendments in this group. I invite them not to press the amendments.
My Lords, I am most grateful to noble Lords across the Committee for their consideration and for their contributions in this important area. As the noble Baroness, Lady Kidron, and the noble Lord, Lord Clement-Jones, both said, this was an area of struggle for the Joint Committee. The debate today shows exactly why that is so, but it is a struggle worth having.
The noble Lord, Lord Bethell, talked about there being a gap in the Bill as it stands. The amendments provide for risk assessments, for transparency and, fundamentally, for explaining things in a way that people can actually understand. These are all tried and tested methods and can serve only to improve the Bill.
I am grateful to the Minister for his response and his consideration of the amendments. I want to take us back to the words of the noble Baroness, Lady Kidron. She explained it beautifully—partly in response to the comments from the noble Baroness, Lady Fox. This is about tackling a system of amplification of misinformation and disinformation that moves the most marginal of views into the mainstream. It is about restricting the damage which, as I said earlier, can lead to the most dire of consequences. Amplification is the consideration that these amendments seek to tackle.
I am grateful to the noble Lord, Lord Moylan, for his comments, as well as for his amendments. I am sure the noble Lord has reflected that some of the previous amendments he brought before the House somewhat put the proverbial cat among the Committee pigeons. On this occasion, I think the noble Lord has nicely aligned the cats and the pigeons. He has managed to rally us all—with the exception of the Minister—behind these amendments.
The noble Baroness is entirely right to emphasise amplification. May I put into the mix the very important role of the commercialisation of health misinformation? The more you look at health misinformation, the more you realise that much of its harm stems from making money out of people’s fears. I agree with the noble Baroness, Lady Fox, that there should be a really healthy discussion about the efficacy, safety and value for money of modern medicines. That debate is worth having. The Minister rightly pointed out some recent health scandals that should have been chased down much more vigorously. The commercialisation of people’s fears bears further scrutiny and is currently a gap in the Bill.
I certainly agree with the noble Lord, Lord Bethell, on that point. It is absolutely right to talk about the danger of commercialisation and how it is such a driver of misinformation and disinformation; I thank him for drawing that to the Committee’s attention. I also thank my noble friend Lady Healy for her remarks, and her reflection that these amendments are not a question of restricting free speech and debate; they are actually about supporting free speech and debate but in a safe and managed way.
The Minister gave the Committee the assurance that the Bill in its current form tackles the most egregious forms of disinformation and misinformation. If only it were so, we would not have had cause to bring forward these amendments. I again refer to the point in the Minister’s response when, as I anticipated, he referred to the false communications offence in Clause 160. I repeat the point gently but firmly to the Minister that this just does not address the amplification point that we seek to focus on. One might argue that perhaps it is more liberal and proportionate to allow misinformation and disinformation but to focus on tackling their amplification. That is where our efforts should be.
With those comments, with thanks to the Minister and other noble Lords, and in the hope that the Minister will have the opportunity to reflect on the points raised in this debate, I beg leave to withdraw the amendment.
Amendment 52 withdrawn.