I beg to move,
That this House
has considered forecasting and modelling during covid-19.
It is a pleasure to speak under your chairmanship, Sir Edward. I speak not to bury science, but to praise it. During the covid pandemic, there has been some remarkable, wonderful science; I just question to what extent that includes the modelling and forecasts that have come from it. Thanks to some questionable modelling that was poorly presented and often misrepresented, never before has so much harm been done to so many by so few on the basis of such questionable and potentially flawed data.
I believe that the use of modelling is pretty much getting to be a national scandal. That is not just the fault of the modellers; it is how their work was interpreted by public health officials and the media—and yes, by politicians, including the Government, sadly. Modelling and forecasts were the ammunition that drove lockdown and created a climate of manipulated fear. I believe that that creation of fear was pretty despicable and unforgivable. I do not doubt that modelling is important or that there has been some good modelling, but too often it has been drowned out by hysterical forecasts. I am not, as Professor Ferguson implied, one of those with an “axe to grind”. I do, however, care about truth and believe that if someone influences policy, as the modellers and Imperial College London have done, they should be questioned. Frankly, they have not been questioned enough.
Above all, I want to understand why Government, parts of the media and the public health establishment became addicted to these doomsday scenarios, and then normalised them in our country with such depressing and upsetting consequences for many. I do not pretend to be an expert; I am not. I defended my own PhD at the end of last year, but it is not in epidemiology and I do not pretend to be particularly knowledgeable about that. But depending on time—I know others want to come in as well—I will quote from 13 academic papers and 22 articles authored by a total of approximately 100 academics.
This is a story of three scandals, and the first one took place 21 years ago. In 2001, we faced the foot and mouth emergency. We reacted drastically by slaughtering and burning millions of animals, and farmer suicides and bankruptcies followed. That policy was allegedly heavily influenced by Imperial College modelling and Professor Ferguson. Since foot and mouth, two peer-reviewed studies examined the method behind that particular madness. I quote from them now to show there are practical and ethical questions over modelling going back two decades.
In a 2006 paper titled “Use and abuse of mathematical models: an illustration from the 2001 foot and mouth disease epidemic in the United Kingdom”—I apologise for these wordy, long titles; they are not that catchy—the authors confirmed that Ferguson’s model
“probably had the most influence on early policy decisions” and
“specifically, the introduction of the pre-emptive contiguous culling policy”.
That is the mass slaughter of animals near infected farms. The authors said that the consequences were “severe” and
“the models were not fit for the purpose of predicting the course of the epidemic”
—not a good start. They remain “unvalidated”. Their use was “imprudent” and amounted to
“the abuse of predictive models”.
Devastatingly, the authors wrote
“The UK experience provides a salutary warning of how models can be abused in the interests of scientific opportunism.”
It is difficult to find a more damning criticism of one group of scientists by another.
A 2011 paper, “Destructive tension: mathematics versus experience—the progress and control of the 2001 foot and mouth disease epidemic in Great Britain”—bit of a mouthful—by four academics said the models that supported the culling policy were “severely flawed” and based on flawed data with “highly improbable biological assumptions”. The models were
“at best, crude estimations that could not differentiate risk”.
That is not a very good “at best”. At worst, they were “inaccurate representations”.
Sadly, the paper said, impatience for results
“provided the opportunity for self-styled ‘experts’, including some veterinarians, biologists and mathematicians, to publicise unproven novel options.”
Some of the scientific work—some of it modelling by Ferguson, some of it not—was cited as “unvalidated” and “severely flawed”, with inaccurate data and “highly improbable assumptions” leading to “scientific opportunism”. Does that remind anybody of anything more recent?
Scroll forward 20 years. As with foot and mouth, with covid we had a nervous Government presented with doomsday scenarios by Imperial—the prediction of 500,000 dead—that panicked them into a course of profound action with shocking outcomes. After the lockdown had gone ahead, Imperial publicised work claiming that the interventions had saved millions of lives.
That work is now being challenged. Because of time, I will quote only a small selection. In a paper entitled, “The effect of interventions on COVID-19”, 13 Swedish academics—Ferguson ain’t popular in Sweden, I can tell Members that much—said that the conclusions of the Imperial study were not justified and went beyond the data. Regensburg and Leibniz university academics directly refuted Imperial College in a paper entitled “The illusory effects of non-pharmaceutical interventions on COVID-19 in Europe”, which said that the authors of the Imperial study
“allege that non-pharmaceutical interventions imposed by 11 European countries saved millions of lives. We show that their methods involve circular reasoning. The purported effects are pure artefacts, which contradict the data. Moreover, we demonstrate that the United Kingdom’s lockdown was both superfluous and ineffective.”
I am not saying that this stuff is right; I am just saying that there is a growing body of work that is, frankly, taking apart Imperial’s. Remember, we spent £370 billion on lockdown that we will never get back. I could continue with other quotes, but I think Members get the flavour.
Moreover, a substantial number of other papers now question not Imperial per se but the worth generally of lockdowns. A pre-print article by four authors, “Effects of non-pharmaceutical interventions on COVID-19: A Tale of Three Models”, said:
“Claimed benefits of lockdown appear grossly exaggerated.”
In another paper, three authors found no clear, significant benefit of lockdowns on case growth in any country. Other papers continue that theme. I will quote one more, on adults living with kids. Remember: we shut schools because we were scared that kids would come home and infect older people, who would then die. This paper, in The BMJ, found
“no evidence of an increased risk of severe COVID-19 outcomes.”
We shut down society and schools just in case, doing extraordinary harm to people’s lives, especially young people. I am not a lockdown sceptic, as Ferguson casually describes some of his critics, but I am becoming so. Do you know why, Sir Edward? Because I read the evidence, and there is a growing body of it. In fact, there is one quote that I did not read out. There was a study of lots of countries that had lockdowns and lots that did not, and the data was inconclusive.
The third element of the scandal is the recent modelling. Swedish epidemiologists looked at Imperial’s work and compared it with their own experience. Chief epidemiologist Anders Tegnell said of Imperial’s work that
“the variables…were quite extreme…We were always quite doubtful”.
Former chief epidemiologist Johan Giesecke said Ferguson’s model was “almost hysterical”. In the House of Lords, Viscount Ridley talked of a huge discrepancy and flaws in the model and the modelling. John Ioannidis from Stanford University said that the “assumptions and estimates” seemed “substantially inflated”.
There was a second example last summer. In July 2021, the good Professor Ferguson predicted that hitting 100,000 cases was “almost inevitable”. He told the BBC that the real question was whether we got to double that or even higher. That is where the crystal ball starts to fail: we got nowhere near 200,000, and we got nowhere near 100,000. There was nothing inevitable about Professor Ferguson’s inevitability. In The Times, he blamed the Euros for messing up his modelling because—shock horror—people went to pubs a lot to watch the games during the competition. When the tournament finished—shock horror—they did not. That seems to be the fundamental problem: where reality comes up against models, reality steamrollers them, because models cannot cope with the complexity of real life. To pretend that they can, and even that they are useful when so many of them have proved not to be, is concerning.
Ferguson is only one of many people, especially in Independent SAGE but also in SAGE, who did not cover themselves in glory. Raghib Ali—a friend of my hon. Friend Mr Baker, who I am delighted is present—is one of the heroes of covid. He noted that many left-wing SAGE members
“repeatedly made inaccurate forecasts overestimating infections”.
Very often, they were falsely described on the BBC.
I am grateful to my hon. Friend for mentioning my friend and constituent Raghib Ali, who has indeed been one of the absolute heroes of this pandemic—not only in his advice to us all, including online, but through his service in hospitals. I hope my hon. Friend will not mind my saying that I do not think any of us can speak for Raghib about his opinion of modelling, and I know my hon. Friend is not trying to.
I quite agree, and I thank my hon. Friend for that, but I am deeply grateful to Raghib and other people for speaking out. Just for the record, the communist Susan Michie, who is quoted quite often by the BBC, is not a medical doctor, a virologist or an epidemiologist. She is a health psychologist, so why on earth is she talking about epidemiology?
The third scandal took place this winter. Imperial, the London School of Hygiene and Tropical Medicine and others—I think they included Warwick—predicted 5,000 daily covid deaths, with 3,000 daily deaths as the best-case scenario. They were hopelessly inaccurate, and at this point the tide really begins to turn. Dr Clive Dix, a former vaccine taskforce head, said:
“It’s bad science, and I think they’re being irresponsible. They have a duty to reflect the true risks, but this is just headline grabbing.”
As I say, the tide is turning. Oncology professor Angus Dalgleish describes Ferguson’s modelling as “lurid predictions” and “spectacularly wrong”. The great Carl Heneghan, another scientist who has emerged with great credit for his honesty and fairness of comment, says:
“it’s becoming clearer all that ministers see is the worst-case scenario”.
Professor Brendan Wren says:
“Dodgy data and flawed forecasts have become the hallmarks of much of the scientific establishment”—what a damning quote!—
“which has traded almost exclusively in worst-case scenarios...this must stop now.”
I will wind up in the next two to three minutes—I will speak for no longer than 15 minutes because other people wish to get in, and I am always mindful of that. What is the result of all this? The result, as UCL’s Professor Francois Balloux notes, is a
“loss of trust in government and public institutions for crying wolf.”
That is just it. We have had hysterical forecasts, models taken out of context, and worst-case scenarios normalised.
In the Army, there is something called the most dangerous course of action, and there is something called the most likely course of action. To sum up in one sentence how we got this wrong, we have effectively taken the most dangerous course of action and collectively—the politicians, media, scientists and health professionals—presented that as the most likely course of action, but it was not. Why did politicians say, “Follow the science” as a way of shutting down debate, when we know that science is complex and that our outputs are dependent on our inputs? It was down to public-health types, whose defensive decision making would only ever cost other people’s jobs, other people’s health, other people’s sanity, other people’s education and other people’s life chances.
We know that the Opposition supported lockdown from the word go, but a little more opposing might have been helpful. The BBC and the Guardian have been salivating at state control and doomsday scenarios. Against this tsunami of hysteria and fear, thank God for The Spectator, The Telegraph and, yes, the Daily Mail for keeping alive freedom of speech and putting forward an alternative, which is now being increasingly scientifically vindicated. I accept that lockdown was understandable at first—I get that—but I believe the continuation of lockdown after that first summer was an increasingly flawed decision.
In wrapping up, I have some questions. To Professor Ferguson and the doomsday modellers: why are so many of your fellow academics disputing your work and your findings? To the BBC, as our state broadcaster: why did you so rarely challenge Ferguson, SAGE or Independent SAGE? Why did you misrepresent experts, and why did you allow yourself to become the propaganda arm of the lockdown state? To the Government: how could we have been so blinkered that we thought that following the science meant shutting down scientific debate? Why did we never put other datasets in context for the British people, or even in the contexts in which these profound and enormous decisions were made? Why did we think that it was in our nation’s interests to create a grotesque sense of fear to manipulate behaviour? SAGE and Independent SAGE kept on getting it wrong. To the public health types, I quote from Professor Angus Dalgleish again:
“Flailing around, wallowing in hysteria, adopting impractical policies and subverting democracy, the Chief Medical Officer is out of his depth. He has to go if we are ever to escape this nightmare.”
He is not a journalist; he is an oncologist—a senior oncologist.
Twice in 20 years, we have made some pretty profound and significant errors of judgment, using modelling as a sort of glorified guesswork. I suggest respectfully to the Government that, after foot and mouth and covid, never again should they rely on dubious modelling, regardless of the source and regardless of the best intent. I am sure that Imperial and all these other people do the best that they can, and am very happy to state that publicly. However, why has so much of their work been described—and I will use the words of other academics—as “unvalidated”, “flawed”, “not fit for purpose”, “improbable”, “almost hysterical”, “overconfident”, “lurid”, “inflated”, “pessimistic”, “spectacularly wrong”, “fraudulent” and as “scientific opportunism”?
Thank you very much, Sir Edward. I begin by referring to the declarations that I have made in connection to the Covid Recovery Group.
I am a professional aerospace and software engineer—at least I was in my former life. I have an MSc in computer science, and am very interested in models. However, there is an old joke among engineers, which derives from a “Dilbert” cartoon, that the career goal of every engineer is not to be blamed for a major catastrophe. I wonder whether that spirit infuses not only expert advice but modelling in particular. We are all indebted to The Spectator for its data hub, which shows how the data has worked out against the models. As anyone can see by going to data.spectator.co.uk, it is the same story again and again: enormous great mountains of death projections, and underneath them the reality of much lower lines. I will leave it to people to look for themselves at the data, rather than trying to characterise the curves for Hansard.
There is a great deal to be done in terms of institutional reform of the way in which modelling is done and informs public policy. That is a very old problem; I found a great article in Foreign Affairs that goes back a long time, to the post-war era, about this problem. It is time we did something about it, through institutional reform. The situation is now perfectly plain: under the Public Health (Control of Disease) Act 1984, even our most basic liberties can be taken away with a stroke of a pen if a Minister has been shown sufficiently persuasive modelling—not even data—that tells them that there is trouble ahead.
I have put this on the record before, and I hope that my right hon. Friend the Prime Minister will not mind. Before we went into the 2020 lockdown, he called me; I was amazed to be at home and to have the Prime Minister of the UK call me. “Steve, I have been shown death projections—4,500 people a day and the hospitals overwhelmed.” I gave him two pieces of advice: “First, if you really believe that we are going to have 4,500 people a day die, you’d better do whatever it takes to prevent that from happening,” which is not advice that anyone would have expected me to give, but that is what I said, and, “Secondly, for goodness’ sake, go and challenge the advice—the data.”
That is why Carl Heneghan, Raghib Ali, Tim Spector and I, whether in person or virtually, were seen in Downing Street, and were there to challenge the data. By Monday, Carl Heneghan had taken the wheels off those death projections, by which the Prime Minister had, disgracefully, been bounced into the lockdown by means of a leak. That is absolutely no way to conduct public policy. However, the reason someone—we will not speculate who—bounced the Prime Minister is that they had been shown those terrifying death projections, which could not possibly be tolerated. Those projections were wrong.
It is monstrous that millions of people have been locked down—effectively under house arrest—have had their businesses destroyed and have had their children prevented from getting an education. Any of us who visit nursery schools meet children, two-year-olds, who have barely socialised. We cannot even begin to understand the effects on the rest of their lives. It is not the modellers’ fault, and I do not wish to condemn modellers. They are technical people, doing a job they are asked to do. We have to ask them to do a different and better job—one which does not leave them, like the old joke about engineers, afraid of being responsible for a major catastrophe.
As my friend Professor Roger Koppl said in his book “Expert Failure”, experts have all the incentives to be pessimistic because if they are pessimistic and events turn out better, they are not blamed. I am sorry: I am not blaming them personally, but I am blaming the whole system for allowing this to arise. The extraordinarily pessimistic models plus the bouncing of a Prime Minister did so much harm.
We need to conduct institutional reform. In relation to models, Mike Hearn, a very senior software engineer, has published a paper available on my website. It is a summary of methodological issues in epidemiology. There are about seven points—an extraordinary set of arguments: things such as poor characterisation, statistical uncertainty and so on, which I have no time to get into. The fundamental point is that we must now have an office of research integrity. The job of that office would be to demand—to insist—that the assumptions going into models and the nature of the models themselves were of a far higher quality.
Finally, to go back to an area of my own expertise, I encourage any software engineer to look at the model code that was released.
I think it should be in the Cabinet Office, because we see that scientific advice applies right across Government.
The code quality of the model that was released was really not fit for a hobbyist. The irony is that the universities that do modelling will overwhelmingly have computer science departments. For goodness’ sake, I say to modellers, go and talk to software engineers and produce good quality code. For goodness’ sake, stop using C++. People are using, as they so often do, the fastest computer programming language, but also the most sophisticated and dangerous. As a professional software engineer, the first thing I would say is, “Don’t use C++ if you don’t have to. Models don’t need to; they can run a bit slower. Use something where you can’t make the desperately poor quality coding errors that were in that released model code”. That is really inexcusable and fulfils all the prejudices of software engineers against scientists hacking out poor quality code not fit for hobbyists. As I think people can tell, I feel quite strongly about that, precisely because these poor modellers have had unacceptable burdens placed on them. All the incentives for them to be pessimistic can now be seen in the data. This all has to be changed with an office of research integrity.
I will try to be brief, Sir Edward. Bob Seely has raised a first-class, crucial issue. Clearly we cannot predict the future, but we can prepare for it. Traditional crisis management and risk management list the possible things that might happen and look at the severity and the likelihood, and based on that we produce a number of models. That is the old way of doing it.
The modern way of doing it, instead of creating models, is to create a playbook—a mechanism by which we can look at all the challenges that might face a country and at least put in place a way of dealing with the crisis, whatever it is. Bizarrely, the US has taken that approach but, perhaps rather sadly and tragically, never used the playbook that might have saved it.
We very much went down the model route and took out of our kit bag the one we had for flu. It was too rigid, did not fit and was too slow. The assumptions were not challenged; the real-world evidence was not incorporated; and, worse, we limited our modelling absolutely to covid. We looked at the mortality of covid and its impact, and at the morbidity of covid and its impact, but we should also have looked at the impact of the crisis as a whole and of the proposed solutions, including lockdown and other restrictive measures, across the country, across society and across the economy.
We were told at the time—the Government were challenged on this—that it was too hard: we could not possibly do any modelling with regard to the mortality of lockdown and restrictions, the morbidity of lockdown and restrictions, or the economic impact. That was sadly relegated to second order, but we should never forget that there is a huge interaction between wealth and health.
The interconnectedness of the impacts of the steps that were taken was totally ignored. They were unexplored and unexplained for a good 12 months. Only today, as my hon. Friend the Member for Isle of Wight has articulated so clearly and eloquently, are we looking retrospectively at what happened. At last we are looking at the impact not just on covid patients and those vulnerable to covid, but on patients not at high risk from covid. We are looking at the impact on children’s life chances, which is far more serious than the impact on their education alone. We are beginning to look at the impact on society and communities and—at last—the economy. As my hon. Friend said, never again must we be faced with the question, “Did we unknowingly and unintentionally do more harm than good?”
Forecasting and modelling have a valuable place, but we must never forget that they are tools. Advisers advise; Ministers must ask for the right advice, the right variety of advice, and then decide. Never again should we hear, “We will just follow the science.”
I congratulate my hon. Friend Bob Seely on securing this very important debate and making an excellent speech. I have no wish to repeat the brilliant research that he recited, but he did highlight the repeated failures of modelling throughout the pandemic—not just the modelling itself, but how it was used. The models have not been out by just a few per cent, as he said, but often by orders of magnitude. The way that the models have been used has had life-changing impacts on people across the country.
Before I was a politician, I was a science teacher. One of the joys of teaching science to teenagers is conducting practical experiments in the lab. Once the teacher has ensured that they are not going to burn down the lab, it is important to teach them how to conduct an experiment properly and write it up. The first thing is to create a hypothesis. They must write a statement of what they think will happen and why, using the scientific knowledge they have and some assumptions, then carry out the experiment, write up the research and, crucially, evaluate. They must look at the hypothesis and at what they have observed, and decide whether they match. If they do match, they go back to their assumptions and see why they were correct. If they do not match, if what has happened in the lab and been recorded does not match the hypothesis, they need to ask why—“What assumptions did I make that did not bear out in real life, that did not happen in the lab?”
It seems to me that those are the questions that have not been asked throughout this crisis. Perhaps we can understand why assumptions had to be made quickly the first time, for the first lockdown—assumptions that turned out not to be true. My hon. Friend said that perhaps we are repeating history of 20 years ago, and that there is not that excuse. However, during subsequent waves and restrictions, why were those assumptions not questioned? There were assumptions about how likely the different scenarios were, about people’s behaviour and fatality rates.
Even in December, when plan B was voted through, some of the assumptions could have been declared wrong in real time—the assumption that omicron was as severe as delta, and that the disease would escape the vaccine. Some of the figures were almost plucked out of the air and given no likelihood. Those assumptions should have been challenged earlier and we need to ask why.
I picked up on one assumption following an interview with Dr Pieter Streicher, a South African doctor. He suggested that SAGE models have always assumed that infection rates do not reach a peak until about 70% of the population have had the disease, whereas the real-world data suggest that the infection rates start to slow at around 30% of the population. That makes more sense from a social science point of view, because we know that people are not equally sociable.
Studies by sociologists such as Malcolm Gladwell, who wrote the best-selling “The Tipping Point”, describe the law of the few, where very few people are extremely sociable and pass on a virus, idea or whatever, to many people. Many more people do not socialise as much and are not as good at transmitting. Perhaps we should have looked a lot more at social science, at behaviour and people’s interactions, rather than pure virology and what might happen in a lab. Of course, we do not exist in labs and cannot model the interactions of human beings that easily.
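The point about unequal sociability can be made concrete with a toy calculation. In a two-group SIR model with proportionate mixing, where a small, highly sociable minority makes several times more contacts than everyone else, prevalence peaks at a markedly lower cumulative infection fraction than a homogeneous model with the same R0 would imply. The sketch below uses invented, illustrative parameters; it is not a reconstruction of any SAGE model or of Dr Streicher's analysis:

```python
# Toy two-group SIR with proportionate mixing, compared against a
# homogeneous SIR calibrated to the same R0 = 3. All parameters are
# invented for illustration.

GAMMA = 0.2  # recovery rate, i.e. a 5-day infectious period

def simulate(groups, beta, gamma=GAMMA, dt=0.1, days=400):
    """groups: list of (population_fraction, contact_rate).
    Returns the cumulative infected fraction when prevalence peaks."""
    I = [1e-5 * n for n, _ in groups]            # small seed in each group
    S = [n - i for (n, _), i in zip(groups, I)]
    denom = sum(n * c for n, c in groups)        # mean contact rate <c>
    peak_prev, cum_at_peak = 0.0, 0.0
    for _ in range(int(days / dt)):
        # contact-weighted prevalence drives the force of infection
        w_prev = sum(c * i for (_, c), i in zip(groups, I)) / denom
        new = [beta * c * s * w_prev * dt for (_, c), s in zip(groups, S)]
        S = [s - x for s, x in zip(S, new)]
        I = [i + x - gamma * i * dt for i, x in zip(I, new)]
        if sum(I) > peak_prev:
            peak_prev, cum_at_peak = sum(I), 1.0 - sum(S)
    return cum_at_peak

# Heterogeneous: 20% of people make 4x the contacts of the other 80%.
het_groups = [(0.2, 4.0), (0.8, 1.0)]
mean_c = sum(n * c for n, c in het_groups)       # <c>   = 1.6
mean_c2 = sum(n * c * c for n, c in het_groups)  # <c^2> = 4.0
beta_het = 3.0 * GAMMA * mean_c / mean_c2  # R0 = beta*<c^2>/(<c>*gamma) = 3
hom_groups = [(1.0, 1.0)]
beta_hom = 3.0 * GAMMA                     # homogeneous R0 = beta*c/gamma = 3

het = simulate(het_groups, beta_het)
hom = simulate(hom_groups, beta_hom)
print(f"cumulative infections at the prevalence peak: "
      f"homogeneous {hom:.0%}, heterogeneous {het:.0%}")
```

With these invented numbers, the homogeneous model does not turn over until roughly two thirds of the population has been infected—the classic 1 - 1/R0 threshold, close to the ~70% figure quoted above—while the heterogeneous model peaks substantially earlier, because the most sociable fifth of the population is infected, and removed from the transmission chain, first.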
The tragedy is that this was not a paper exercise. This is not an experiment that happened in a lab where one can go back and repeat until valid results are achieved. These models, and particularly the weight they have been given, have caused serious destruction of lives and livelihoods. Who was modelling the outcomes for education, child abuse and poverty? Who was modelling the impact on loneliness, despair and fear? We have to ask why those assumptions were not interrogated.
My hon. Friend Mr Baker has made some excellent points about the need for institutional reform. I completely agree with him, but we also need to look at the impact on free speech. At the beginning of this crisis, the mainstream media took on the idea that lockdown was the only strategy.
My hon. Friend spoke earlier about the repeatability of scientific experiments with hypotheses. One of the reasons I talked about C++ is that by using multithreading, it is possible to end up with code that does not produce repeatable outputs. Does she agree that it is very important that when models are run, they produce consistent and coherent outputs that can be repeated?
I absolutely agree with my hon. Friend. I would have said to my students, “It is not a valid experiment if you cannot follow the same method, repeat the experiment and produce the same results. It is completely invalid if you cannot do that.” I am not a software engineer, so I take my hon. Friend’s word for it when it comes to the use of programming languages, but he is absolutely correct that the whole experiment is not valid if the results cannot be repeated.
Over just the past few months, there has been an opening up of debate that has moved from The Spectator into mainstream media, where people such as my hon. Friends present have been able to speak more freely about the problems and costs of lockdown, and have not suffered so much criticism—I hesitate to say “abuse”—in the media and on social media. To avoid this happening again, we need institutional change, but we also need to understand that these are not black-and-white issues. It is good, right and wise to question the data and the science, and to put just as much weight on people’s quality of life—the things that make life living—as on the number of people in hospital at one time for a particular disease.
It is a pleasure to see you in the Chair, Sir Edward, and to follow all my hon. Friends, who I note have usually been in a different Lobby from me on most coronavirus measures. I am sure the Minister will be grateful to have somebody speaking from the Government Benches who has been supporting the Government on coronavirus throughout.
However, I too have issues with modelling, which is why I chose to speak in today’s debate. I have more sympathy with modelling, and I will be offering some sort of partial defence and explanation of it in my remarks, because before I was an MP, I was a modeller myself—a software engineer. I wrote in Visual Basic.NET, which is nice and simple: engineers can see what the code does. I worked for bet365, and I used to write models that worked out the chance of somebody winning a tennis match, a team winning a baseball game, or whatever. I had some advantages that Neil Ferguson and these models do not have, in that there are many tennis matches, and I could repeat the model again and again and calibrate it. If I got my model wrong, there were people out there who would tell me that it was wrong by beating me and winning money off me, so my models got better and better.
The problem we have with covid is that we cannot repeat that exercise—there is no counterfactual. We have heard the phrase “marking your own homework”.
I am deeply impressed by all this stuff— I do not quite understand what my hon. Friends are talking about, but it sounds fantastic. However, there is a counterfactual. The counterfactual is when people say, “We are not going to follow the lockdown,” and hey presto! we do not get 3,000 or 5,000 deaths a day and all the people who predicted that are proved wrong. There is a counterfactual called real life.
I thank my hon. Friend for his point, and I accept it, but the problem is that none of these models model changes in human behaviour. We discussed this issue during our debate on the measures that we brought in before Christmas, and as I said at the time, the reality was that people were not going to the pub, the supermarket or anything because they were changing their behaviour in the face of the virus. If the models do not take that into account, they cannot know where the peak will be. The models show what would happen if nobody changed their behaviour at all, but of course, the reality is that people do. We have not got good enough at modelling that, because we do not know exactly how people change their behaviour.
As a tangential point, behavioural science has had a really bad pandemic. We were told that people would not stand for lockdowns, but—to the chagrin, I am sure, of many of my hon. Friends—people did stand for them. Looking at the polling, they were incredibly popular: they were incredibly damaging, as colleagues have said, but people were prepared to live with lockdowns for longer than the scientists thought they would. There was initially an attempt to time the lockdown, because people would not last for that long. In reality, that is not what happened, so behavioural science also has a lot to answer for as a result of the pandemic.
I think that models still have value. My biggest concern arising from the experience of the pandemic is the bad parameters that have gone into those models at times—I will refer to two particular examples.
The time when I was nearest to following my colleagues into the Lobby was the extension to freedom day in June, because on that day we had a session of the Science and Technology Committee, which has taken excellent evidence throughout; it has a session on reproducibility in science tomorrow, where we will also look at this sort of thing. On the day of that vote, I was questioning Susan Hopkins and we were considering vaccine effectiveness. Public Health England had just produced figures showing that the actual effectiveness of the Pfizer vaccine against hospitalisation was 96%, yet the model that we were being asked to rely on for the vote that day assumed it was 89%. Now, 89% to 96% may not sound like a huge difference, but it is the difference between 4% of people going to hospital and 11%—almost three times as many. It was ludicrous that that data was available on that day but had not yet been plugged into the models. As I said to my hon. Friend Miriam Cates, that was one of the reasons that I said in the Chamber that the case was getting weaker and weaker, and that if the Government tried to push it back any further, I would join my colleagues in the Lobby on the next occasion.
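The arithmetic is simple enough to check. Assuming, purely for illustration, that effectiveness against hospitalisation translates directly into the residual share of otherwise-hospitalised cases that still end up in hospital:

```python
# Residual hospitalisation share implied by each effectiveness figure.
# Illustrative arithmetic only: 96% was the Public Health England figure,
# 89% the figure assumed in the model put before the House.
observed, modelled = 0.96, 0.89
residual_observed = 1 - observed   # share still hospitalised under the PHE figure
residual_modelled = 1 - modelled   # share still hospitalised under the model's figure
ratio = residual_modelled / residual_observed
print(f"{residual_modelled:.0%} vs {residual_observed:.0%}: "
      f"{ratio:.2f} times as many hospitalisations")
```

A seven-point gap in effectiveness nearly trebles the implied hospitalisations, which is why plugging in the latest figure matters so much.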
The other case is omicron. Just before Christmas, we had models that basically assumed omicron was as severe as delta. We already had some evidence from South Africa that it was not, and since then we have discovered that it was even milder than we had thought. That feeds into what my hon. Friend was saying about the total number of people who are susceptible. The fact that omicron peaked early is not because people changed their behaviour but because the susceptible population was not as big as we thought: more people had been exposed, and more people had had asymptomatic disease. There are all those sorts of problems there.
More philosophically, my models when I worked for a bookmaker were about probabilities. Too often we focus on a single line, and too often that has been the so-called worst-case scenario. The worst-case scenario is very black indeed at all times, but Governments cannot work purely on a worst-case scenario; they have to pick a reasonable percentile to work with, whether that is the 90th or the 95th, tempered, obviously, by how bad the scenario would be for the country. The precautionary principle is important, and we should take measures to protect against scenarios that have only a 5% chance of happening, or indeed a 2% chance, but we should do so only if the insurance price that we pay––the premium for doing that––is worth paying. That comes down to the fact that not enough economic modelling has been plugged in, as my hon. Friend Mr Baker has repeatedly said in the Chamber and elsewhere throughout the pandemic.
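That insurance logic can be made concrete. In the sketch below, with wholly hypothetical numbers, a precaution is worth taking when the expected loss it avoids exceeds the premium paid to avoid it:

```python
def worth_insuring(p_scenario: float, loss: float, premium: float) -> bool:
    """True if the expected loss avoided exceeds the premium paid to avoid it."""
    return p_scenario * loss > premium

# A 5% chance of a large loss can justify a modest premium...
print(worth_insuring(0.05, 1000.0, 20.0))   # expected loss 50 vs premium 20
# ...but a 2% chance does not justify an expensive one.
print(worth_insuring(0.02, 1000.0, 50.0))   # expected loss 20 vs premium 50
```

The point is not the numbers, which are invented, but that the calculation cannot be done at all unless the cost of the precaution is modelled alongside the risk.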
Any Government must try to predict the course of a pandemic to make sensible plans and I believe that the best tool for that is still modelling, but we must learn the lessons of this pandemic. We must learn from shortcomings such as the failure to understand human behaviour properly, the failure to make code open source so that other people can interrogate a model and change the parameters, and the failure to enter the right parameters and update the model at the moment politicians are being asked to vote on it. For all those reasons, I am grateful for today’s debate and look forward to hearing the Opposition spokespeople and the Minister. I thank my hon. Friend the Member for Wycombe for today’s debate.
Thank you for calling me, Sir Edward. My first thought is, thank goodness that health is devolved. It will surprise no one to learn that I will not be joining the libertarian pile-on against scientists led by people who, even in these circumstances in a Chamber this small, still do not use face coverings.
No, I will not. The libertarian right have had enough of a kick at the ball in this debate. [Interruption.] No, I will not give way. At least half of those who have spoken today are not wearing face coverings.
I know that it is customary at this point to thank the Member who secured the debate but, in a break from tradition, I will start by thanking the scientists––the analysts, the medical professionals, the health experts, the clinicians and everyone else who stopped what they were doing two years ago and dedicated their lives to trying to work out and predict where the global pandemic might go and the impact that it could have on us. Two years ago, when tasked with working out this brand-new virus, every step that they took was a step into the unknown. There was no textbook to chart the route of this pandemic and every decision that they took was a new decision. They knew that every piece of advice they gave could have serious consequences for the population. The pressure of doing real-time covid-19 analysis must have been enormous. I, for one, really appreciate that scientists erred on the side of caution in the midst of a global pandemic in which tens of thousands of people were dying when there were no vaccines or booster protection. To all the SAGE officials, scientists, medical staff and public health experts who have done a remarkable job in keeping us safe, I say a huge and unequivocal thank you.
We know and can accept that forecasting and modelling during a pandemic are not an exact science but based on the best available evidence and a series of scenarios, presented from the best to the worst case. As Professor Adam Kucharski of the London School of Hygiene and Tropical Medicine said,
“a model is a structured way of thinking about the dynamics of an epidemic. It allows us to take the knowledge we have, make some plausible assumptions based on that knowledge, then look at the logical implications of those assumptions.”
As the much-maligned Professor Ferguson told the Science and Technology Committee,
“Models can only be as reliable as the data that is feeding into them.”
Of course such models have their limitations. They are not forecasts but mathematical projections based on the data available to the modellers. If tests are not being done, or positive tests are not being registered, for example, the modelling and forecasting will be affected. It is important to remember, however, that while Bob Seely was telling anyone who would listen that modelling predictions were a national scandal, Professor Chris Whitty was telling the Science and Technology Committee that
“a lot of the advice that I have given is not based on significant forward modelling. It is based on what has happened and what is observable.”
Advice on lockdown and other public health measures was given by SAGE and others on the basis of observable data, not on forward modelling alone. I put it to the hon. Member for Isle of Wight that he was quite wrong when he told GB News that
“So much of what’s happened since with…inhuman conditions that many of us struggled with” was
“built on some really questionable science.”
Professor Whitty said clearly that he did not base his advice on that; rather, he based it on what he could see around him.
The primary purpose of modelling is simply to offer a sense of the impact of different restrictions. A study published in the journal Nature estimated that the first lockdown saved up to 3 million lives in Europe, including 470,000 in the UK. The success of disease modelling lay in predicting how many deaths there would have been if lockdown had not happened. SAGE officials, scientists and medical staff have done a remarkable job to keep us all safe, and many people across these islands owe their lives to them. I believe that the work that those people have done under enormous pressure should be applauded and appreciated, not undermined by the far-right libertarian Tories we have heard from today.
On a point of order, Sir Edward. I think that the hon. Member for Isle of Wight has twice called me a fool. Can you clarify whether that is parliamentary language?
Thank you, Sir Edward; it is a pleasure to serve under your chairship. I congratulate—I think—Bob Seely on securing the debate, because I welcome impartial and honest interrogation of the science, as well as decisions made over the last two years that have been important for our country. I also welcome extreme scepticism about some of the decisions made by the Government. This debate has not been an honest and independent inquiry into the science, however. It clearly comes with an ideological bent, so it has to be taken in that light.
I also begin by paying tribute to our public servants and Government scientists.
The hon. Lady said that we have made points that require an ideological bent. I invite her to look at what I said and identify at least three points that required any kind of ideological justification. Contrary to the point made by Brendan O'Hara, nothing that I said required libertarian political philosophy.
That was another speech. I have never been in a room with so many software engineers who are also MPs. I begin by paying tribute to our public servants, our Government scientists, epidemiologists, and the scientific community who have worked tirelessly and put everything on the line to keep the public safe. That is what they have been trying to do over the past two years: keep people safe and save lives—and they have. They have shouldered the fear, anguish and hope of an entire nation that was experiencing deep trauma. They have, magnificently, been prepared to put their head on the block, if needs be. I hope the Minister will agree with me that it is very disappointing to hear them come under attack today from certain colleagues, despite everything that they have done.
I would remind those who seek to attack SAGE and our Government scientists that, while they were looking forward, planning and working hard on the evidence of what the virus might throw at us next, it was freshers week in Downing Street. They are not the enemy here. In fact, had a bit more attention been paid to their models, had there been more modelling before the start of the pandemic and had more action been taken in February and March 2020, thousands of lives could have been saved. It is not modelling that is the intrinsic problem here—it is decision making.
Modelling is a hugely important tool for managing epidemics, tried and tested, with constant efforts to improve it. I agree with earlier comments that there should be more models: models of the impact on mental health, education and poverty, and models that learn from other countries, to inform our decisions. As Graham Medley, a member of SAGE, explains very clearly, models are not predictions and are not meant to be seen as such; they are the “what ifs” that Governments can use to inform decisions and to guide what they might need to prepare for, which should include the worst-case scenarios—that is a crucial distinction. Accurate predictions cannot be made with such an unpredictable virus, when individual behaviour is also unpredictable, so models and scenarios are the best tools to give us the parameters for the decisions that will be made. As Graham Medley said, SPI-M—the Scientific Pandemic Influenza Group on Modelling—the sub-committee of SAGE that he chairs, produced
“scenarios to inform government decisions, not to inform public understanding directly. They are not meant to be predictions of what will actually happen and a range of scenarios are presented to policymakers with numerous caveats and uncertainties emphasised.”
Who would want it any other way?
My question to the sceptical Members present here today is: what is the alternative? We need to have those parameters. The alternative is guessing without parameters and knowledge.
I am going to move on. I do not want another speech from the hon. Member, given the time constraints. I am waiting for the Minister to answer my questions.
The Public Administration and Constitutional Affairs Committee also had problems with the communication of the modelling, and it is there that I find some common ground with the hon. Members who have spoken earlier. The Committee said in its report last March that communication has not always been transparent enough, and that accountabilities have been unclear. I agree. If time is not taken to explain carefully to the public and the media what modelling actually is, and room is instead left for scenarios to be interpreted as predictions, the practice of modelling and forecasting will inevitably be rubbished and scoffed at, and Government scientists blamed as doom-mongers. Not communicating the data and the models properly creates more uncertainty and misery for small businesses, which have been asked to bear enough as it is, as we saw over the Christmas period.
No. The PACAC report makes it clear that no one in Government has taken responsibility for communicating the data. The report states:
“Ministerial accountability for ensuring decisions are underpinned by data has not been clear. Ministers have passed responsibility between the Cabinet Office and Department of Health and Social Care,”.
That is why, as a member of the shadow Cabinet, I am responding to this debate. There are questions about the use and communication of the data.
I want to come to why we needed to rely on modelling and forecasting. Significant mistakes made throughout the last 10 years of Conservative government are the problem. There could have been much better information, and we could have been much better informed, if there had been better pandemic and emergency preparedness.
I agree, Sir Edward. Labour invested in pandemic planning through the Civil Contingencies Act 2004, but the Tory Government did not continue that investment. Exercise Winter Willow in 2007 involved 5,000 people from all walks of life simulating a pandemic. It identified the need for PPE, for PPE training and for planning against wide-ranging social and economic disruption. The Labour Government, then led by Gordon Brown, invested heavily in pandemic planning.
Cut to 2016: Exercise Cygnus made 26 key recommendations about PPE, about urgent and drastic improvements to the NHS, and about the likely impact on care homes. Most of that was ignored. PPE training stopped, stocks were run down—much of the stock was left to go out of date—and there were no gowns, visors, swabs or body bags at all. The UK pandemic plan was mothballed, and we were unready for the pandemic. No wonder we had to rely so much on modelling and forecasting.
We could have been much more ready. The Cabinet Office should have stepped up to enable cross-departmental organisation, and organisation with the devolved authorities based on plans, informed by the results of exercises and earlier modelling, but it did not. I hope that the Minister will echo that, distance herself from some of the earlier comments and criticism of our scientific community and respond to the points about pandemic planning and what we can learn.
Finally, I know that Bob Seely has asked for an inquiry into modelling. I welcome the inclusion of that in the covid inquiry. I hope that the Government will launch that inquiry. They have appointed a chair, but that chair is waiting for the powers she needs to begin getting evidence from scientists, software engineers and everyone she needs to hear from.
It is a pleasure to serve under your chairmanship, Sir Edward. I thank my hon. Friend Bob Seely for introducing today’s debate, and all hon. Members for their thoughtful and wide-ranging contributions.
Throughout the pandemic, we have been supported by world-leading scientists, epidemiologists and modellers, many of whom have worked around the clock, often without being paid for their contributions. During the fast-moving and uncertain pandemic, that support has been critical to ensuring that the Government have access to the latest and most reliable scientific advice.
The UK is very fortunate to have such strong academic expertise to draw upon. Without the tireless work of our country’s scientists both within and outside SAGE, the UK would have been left in the dark at many key moments over the past two years. Scientific advice from disciplines ranging from immunology through to behavioural science has played a role. Virologists have helped us to understand how different variants behave in the body, while clinicians have advised on the higher risks to different patient groups. Similarly, epidemiology and infectious disease modelling help us to understand the spread of covid-19 across the population, and the impact it might have.
It is important, however, to remember that such modelling is a tool to enable Ministers to make evidence-based decisions. Modelling provides a good way of understanding the range of possible futures that the pandemic might have in store for us; a good way of identifying what will determine which of those futures we face; and a good way of exploring how different policies, rules and guidance could shape which of those futures we end up in.
I am conscious that I need to leave time at the end, but I will endeavour to get through my speech and take interventions.
It is not, however, and never can be, a crystal ball, regardless of who is doing the modelling. Models cannot perfectly predict the future, and modellers would not claim they do so. Contrary to how they may be presented in the media, modelling outputs are not forecasts, nor do they focus only on the most pessimistic outcomes. Model advice to Government is not simply a single line on a graph.
There is always uncertainty when looking into the future: uncertainty from potential policy changes, from the emergence of new variants, or from people’s behaviour and mixing and the changes that that brings. Central to modelling advice is an assessment of that uncertainty, of what factors drive it, and of how the results might change if the model’s inputs and assumptions change as new evidence emerges. As such, the modellers look at a wide range of possibilities and assumptions in order to advise policy makers on principles, not to attempt to say exactly what will happen.
I am grateful to the Minister for giving way. She heard what I said about my conversation with the Prime Minister—it is, of course, a true account of what happened. The reality is that the Prime Minister was shown a terrifying model that subsequently proved to be wildly incorrect, but he took away freedoms from tens of millions of people on that basis. The Minister must surely agree that that does not accord with the very sensible words that she is saying. That is not what actually happened. The Prime Minister was bounced on the basis of profoundly wrong models.
I appreciate the point my hon. Friend is making, and I will come to a point that shows that models are just models; they are not predictions. Yes, they are sometimes proved wrong, but that is for different reasons. It could be that people change their behaviour as a result of the information that they get.
One example, which I was about to come to, is a model from December that considered a range of assumptions for omicron’s intrinsic severity, ranging from 10% to 100% of delta’s, on top of the further reductions in severity that vaccines and prior infection provide. Fortunately, we now know that severity is not at the upper end of that range, and the models have been updated to reflect the evidence as it emerges. It is quite right that, as new evidence emerges, models are adjusted to take it into consideration.
My hon. Friends the Members for Isle of Wight and for Penistone and Stocksbridge (Miriam Cates) raised concerns about the Government’s reliance on modelling advice, both more generally and from individual modelling groups. I want to reassure hon. Members that encouraging a diverse range of opinions, views and interpretations of the data is all part of the process. SPI-M-O and SAGE do not rely on just one model or group but look at advice from a number of independent and world-leading institutions. Robust scientific challenge has been vital to the quality of SAGE advice, with modelling papers regularly released online and the methodology and underlying assumptions clearly laid out for everyone to challenge and bring forward other evidence—it is all out in the open. Sir Patrick Vallance has said:
“No scientist would ever claim, in this fast-changing and unpredictable pandemic, to have a monopoly of wisdom on what happens next.”
As the chief medical officer has emphasised, hard data on what is actually happening to patients and to the population as a whole is an essential part of the advice given.
Modelling is a helpful tool, but it must be considered alongside what is happening to real people at home, in schools or in hospital beds. As SAGE has been so visible and transparent in its advice, some people may think that it is the only form of advice to the Government, but this is not the case. Modelling and other advice from SAGE has been invaluable during the Government’s response to covid-19, but it is only one of the many issues we consider. Modelling helps us to understand the possible risks from the spread of covid-19 but, ultimately, this needs to be balanced against other health, economic and societal impacts.
A number of hon. Members questioned the accuracy of modelling forecasts from SAGE. I would like to reiterate that such modelling outputs are scenarios, not predictions or forecasts. As such, comparisons between past scenarios and what happened in reality should be made with caution. Comparisons must be made on a like-for-like basis, and often they are not.
Yes, I will.
To be frank, what we are doing in many cases is comparing apples and pears. Nevertheless, past modelling has proved remarkably accurate in many cases.
In closing, I would like to take this opportunity to emphasise just how appreciative we should be, and are, to the scientists, academics and Government advisers for all their hard work over the last two years. It was fitting to see this rewarded in the new year’s honours list. Finally, I would like to thank hon. Members again for their participation in today’s debate and the opportunity to discuss the matter further.
I think with one exception that was a very good debate. We all agree that we need good science, and we all agree that scientists have power, like politicians. We have the right, in the public interest, to question these people. It was fascinating listening to some of my hon. Friends—I am not quite sure what they were saying, but it sounded amazing. I am also delighted to agree with Fleur Anderson that, as part of the inquiry, we need to look into the use of modelling, so that if mistakes have been made—with great respect to those who try to say it—we can learn from that experience, we do not make those mistakes again, and the modelling works for the public good, as all good science and all good policy should do.
Motion lapsed, and sitting adjourned without Question put (