Covid-19 was a wake-up call. The published national risk register had been inadequate. No pandemic other than flu was rated as a major threat. Covid was primarily a medical catastrophe but it cascaded into other sectors: to schools and, through its impact on supply chains, manufacturing. There needs to be more joined-up government thinking and firmer guidelines about who, regionally and centrally, has authority in emergencies.
It is welcome that the risk register has been improved. Especially welcome are the comprehensive Biological Security Strategy, published just last September, and the strengthening of the Biological and Toxin Weapons Convention at its 2022 review. The 100-day mission concept to have a vaccine within 100 days of identifying a threat was launched at the G7 in 2021 when the UK held the chair. Can the Minister provide an update on what has happened to follow that up? All these measures need to be global. The earlier a new virus can be identified, the greater the head start in responding before a global spread.
Importantly, pandemics not only spread faster and more globally than they did in the past but cause far worse societal breakdown. European villages in the 14th century continued to function even when the Black Death halved their populations. In contrast, societies today are vulnerable to serious unrest as soon as hospitals are overwhelmed, which could occur before the fatality rate is even 1%. That is why we need to contemplate a societal or ecological collapse that would be a truly global setback. Covid-19 is not the worst that could happen.
The origin of Covid-19 is controversial. A leakage from the Wuhan lab cannot be ruled out. Be that as it may, we cannot rule out future lab leakages. I recall, for example, that a foot-and-mouth outbreak in the UK was caused by a leakage from the Pirbright lab in 2007. There is surely a case for enhancing security and independent monitoring of the level 4 labs around the world that are researching these lethal pathogens and, more importantly, ensuring that experiments on lethal pathogens are not done in less secure labs.
Can we rule out a future release that is intentional rather than accidental? To be sure, Governments and even terrorist groups with specific aims will always be inhibited from releasing engineered pandemics because no one can predict where and how far they can spread. The real nightmare would be a deranged loner with biotech expertise who did not care who became infected, or how many.
In contrast to the elaborate, conspicuous equipment needed to create a nuclear weapon, which can feasibly be monitored by international inspectors, biotech involves small-scale, dual-use technology that will become widely accessible. There are thousands of academic and industrial labs around the world where dangerous pathogens are being studied and modified. An increasing number of individuals will acquire the requisite expertise. The dangers loom ever larger, and regulation of biotech is needed more than ever.
However, the really scary issue is global enforcement. Could the regulations be enforced throughout the world any more effectively than drug or tax laws can be? Whatever can be done may be done by someone, somewhere. This is the stuff of nightmares.
The rising empowerment of malign, tech-savvy groups, or even individuals, by biotech will pose an intractable challenge to Governments and aggravate the tension between freedom, privacy, and security. The world is unprepared for the moral and practical challenges posed by burgeoning biotechnology in general. These scenarios call for clear thinking and well-crafted policies that recognise both biotech’s stupendous potential for human flourishing and its huge potential risk to our safety—indeed, to humanity itself.
We must hope that vaccines and antidotes become ever more effective and speedily produced, in step with the growing threat, and that the UK can indeed achieve influence in what has to be a global programme.
In our schools, attainment levels are poor compared with those of nations in the Far East and northern Europe. In particular, there are too few good science teachers. Young children display enthusiasm and curiosity, often focused on dinosaurs and the cosmos—blazingly irrelevant to their lives, but fascinating—but they are starved of the inspirational teaching that could channel this enthusiasm.
There are three things that can be done. First, we should ensure that conditions are good enough to retain excellent schoolteachers and that their pay level is appropriate for practitioners of a serious profession. Secondly, we should encourage mature individuals to move into teaching from a career in, for instance, research, industry or the Armed Forces. Thirdly, we should make optimum use of the web to supplement and individualise what the teacher can do.
At university level, our international rankings are higher but there is a systemic weakness. The missions of our universities are not sufficiently varied. They all aspire to rise in the same league table, one that gives more weight to research than to what matters to potential students. What should worry us in particular at the moment is the financial pressure that current students are under: the fact that they cannot find affordable accommodation near campus and need to do time-consuming part-time work to support themselves. Universities and the public should expect a full-time commitment from those enrolled on three-year degree courses, but that requires that they are properly supported.
Indeed, there may well be a shift away from full-time three-year degrees. Everyone should have the opportunity to re-enter university or technical education, maybe part-time or online, at any stage in their lives. This path could become smoother, even routine, if there were a formalised system of transferable credits across the whole system of further and higher education. The Government’s lifelong entitlement to support, to be taken à la carte at any stage in life, is a good step forward.
Another problem is that the post-16 school curriculum is too narrow. A particular downside is that those who have been turned off science drop it at 16 and thereby foreclose the chance to qualify at 18 for high-quality university courses, so we should welcome the broad support for some kind of British baccalaureate.
It is shameful that this country is short of the professionals and the high-level technical expertise it needs, so we should listen to the noble Lord, Lord Baker, and strengthen high-level technical education. Degree programmes should be valued by the graduates and geared to job prospects, that is true, but value need not be measured by salary. To give one example, if a fine arts degree gives a gifted and committed artist the expertise to follow their avocation, even if they earn just a living wage, that is surely an outcome to be welcomed.
Of course research is a distinctive activity in most universities, but the encroachment of audit culture and other pressures is rendering our universities less propitious environments for research projects that demand intense and sustained effort. Dedicated stand-alone labs may become preferable, although there is a downside in so far as they reduce contact between talented researchers and students. Indeed, the UK owes its strength in biomedical sciences to its famous labs that allow full-time long-term research, with government funding massively supplemented by the Wellcome Trust, cancer charities, and a strong pharmaceutical industry.
There is a serious concern that academia itself is becoming less alluring as a career. Some people will become academics come what may—the nerdish element, of which I guess I myself am one—but a world-class university system cannot survive just on those. It must attract to its faculty a share of young people who are savvy about their options and ambitious to achieve something distinctive by their 30s. These people increasingly associate academia with years of precarity and undue financial sacrifice. Indeed, the declared rationale for setting up ARIA was to get round the problem by fostering long-term blue-skies research and freedom from bureaucracy in a fashion not available elsewhere in the system. That is fine, but surely it would have been a far higher priority to render less vexatious the bureaucracy of UKRI, whose budget is 25 times higher than what is envisaged for ARIA.
The effective exploitation of new discoveries is an imperative. Universities and research institutes must be complemented by organisations, in the public or private sector, that can offer adequate development and manufacturing capability. This concatenation certainly proved its worth in the recent pandemic. It is likewise imperative that the UK should foster expertise not only in the biological sciences but in energy, climate and the cybersphere—indeed, in all the fields needed to tackle global challenges. We have traditionally suffered from a lack of venture capital to bring things to market, but I worry that our ability to attract and retain mobile academic talent—students and professionals—is now at risk. We have been fortunate with regard to ESO, but it is an unwelcoming deterrent that, as has been mentioned, someone with a family who wants a global talent visa has to fork out more than £20,000.
I shall mention an enlightened recent contribution to these debates, a report co-authored by Tony Blair and the noble Lord, Lord Hague. They call for the creation of a science and tech policy and delivery unit that is
“independent from vested interests and status-quo forces, and able to devise, drive and unblock a reform agenda”.
That is needed, they say, to end the situation whereby
“the Treasury strongly micromanages science and technology spending and is the de-facto controller of the UK’s national R&D strategy”.
Their report advocates measures to reduce the level of audit imposed on universities and argues for the reform of technology transfer offices to encourage more university spin-offs. They say that UKRI should be restructured and there should be new hubs for regional development and
“a network of research institutes tasked with securing our lead in established competitive areas like synthetic biology and AI”.
I thought it was appropriate to listen to those two dormant ex-politicians on a day when many are celebrating the recycling of another of their number as the Foreign Secretary.
However, the societal implications are ambivalent here already. In particular, how can humans remain in the loop? If we are sentenced to a term in prison, recommended for surgery or even given a poor credit rating, we would expect the reasons to be accessible to us and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.
AI systems will become more intrusive and pervasive. Records of our movements, health and financial transactions are in “the cloud”, managed by multinational quasi-monopolies. The data may be used for benign reasons—for instance, medical research—but its availability to internet companies is already shifting the balance of power from governments to globe-spanning conglomerates.
Clearly, robots will take over much of manufacturing and retail distribution. They can supplement, if not replace, many white-collar jobs: accountancy, computer coding, medical diagnostics and even surgery. Indeed, I think the advent of ChatGPT renders legal work especially vulnerable. The vast but self-contained volumes of legal literature can all be digested by a machine. In contrast, some skilled service-sector jobs—plumbing and gardening, for instance—require non-routine interactions with the external world and will be among the hardest to automate.
The digital revolution generates enormous wealth for innovators and global companies, but preserving a humane society will surely require redistribution of that wealth. The revenue thereby raised should ideally be hypothecated to vastly enhance the number and status of those who care for the old, the young and the sick. There are currently far too few of these, and they are poorly paid, inadequately esteemed and insecure in their positions. However, these caring jobs are worthy of real human beings and are far more fulfilling than the jobs in call centres or Amazon warehouses which AI can usurp. That kind of redeployment would be win-win. Yet AI raises deep anxieties; even in the short term, ChatGPT’s successors will surely confront us, writ large, with the downsides of existing social media: fake news, photos and videos, unmoderated extremist diatribes, and so forth.
Excited headlines this year have quoted some experts talking about “human extinction”. This may be scaremongering, but the misuse or malfunction of AI is certainly a potential societal threat on the scale of a pandemic. My concern is not so much the science-fiction scenario of a “takeover” by superintelligence as the risk that we will become dependent on interconnected networks whose failure—leading to disruption of the electricity grid, GPS or the internet—could cause societal breakdowns that cascade globally.
Regulation is needed. Innovative algorithms need to be thoroughly tested before wide deployment, by analogy with the rigorous testing of drugs which precedes government approval and release. But regulation is a special challenge in a sector of the economy dominated by a few multinational conglomerates. Just as they can move between jurisdictions to evade fair taxation, so they could evade AI regulations. How best can the UK help to set up an enforceable regulatory system with global range? It is good news that the Government are tackling this challenge already.
Finally, society will be surely transformed by autonomous robots, even though the jury is out on whether they will be “idiot savants” or will display wide-ranging superhuman intelligence—and whether, if we are overdependent on them, we should worry more about breakdowns and bugs or about being outsmarted, more about maverick artificial intelligence than about real stupidity.
University campuses were silent and deserted during the peak of Covid-19. Two cohorts of students had a really rotten experience. Life has been gradually restored, but nobody expects full reversion to the old normal—nor should we wish for it. Lessons learned in the crisis should energise and accelerate some much-needed reforms of the whole post-18 education sector.
Most students are of course between 18 and 21, undergoing three or four years of full-time, generally residential education and studying a curriculum that is too narrow, even for the minority who aspire to professional or academic careers. This basic structure has prevailed since the 19th century, but universities have vastly expanded and now encompass about 50% of young people.
Post-18 education needs to be much more flexible and open, as fast-changing lifestyles offer new opportunities for both work and leisure, and technology offers new channels for learning. The system should offer everyone the opportunity to enter or re-enter, maybe part-time or online, at any stage in their lives. This path could become smoother, indeed routine, if there were a system of credits and modules that was respected and recognised across the whole system of further and higher education, thereby allowing transfers. Many will still pursue a traditional undergraduate course, using up their entitlement all in one go, but it is a real plus if they can instead choose to use the LLE à la carte—year by year or by a succession of modules at any stage in life.
Students who embark on a degree course but realise that it is not right for them or who have personal hardship should be enabled to leave early with dignity, with credits that formally record what they have accomplished. They should not be disparaged as wastage: they should make the positive claim that “I had two years of college and have an entitlement to return and upgrade later”. Indeed, the overwhelming focus on a degree needs revision. There is nothing magic about the attainment threshold that is reached after three or four years.
It would also improve social mobility if universities, such as my own, whose entry bar is dauntingly high were to reserve a fraction of their places for students who do not come directly from school. They could thereby offer a second chance to those who were disadvantaged at 18 but have caught up by earning two years’ worth of credits at other institutions or online. Such students could then advance to degree level in two further years.
It is a sad fact that the worst educational inequalities are imprinted earlier in life in the pre-school years and during school education. It will be a long slog to ensure that high-quality teaching at school is available across the full geographical and social spectrum. However, promoting lifelong and part-time learning, with flexible assessment, would go some way to offering more support to those whose deprivations start in infancy and lead to barriers that become harder to surmount and to exclusions that offer no second chances.
What about the courses themselves? There is now, post pandemic, more experience of online and remote teaching. We can learn especially from institutions that had already spearheaded innovations pre pandemic, above all the Open University, and let us not forget Arizona State University in the US. We must hope, incidentally, that there is a sympathetic government response to the Open University’s well-founded concerns that current proposals do not offer support to mature learners studying at a substantial distance.
Purely online courses, the so-called MOOCs, have had an ambivalent reception. As stand-alone courses without complementary contacts with a real tutor, they are probably satisfactory only for level 7 vocational courses aimed at motivated mature learners studying part time. These courses should be eligible for support, but there will surely be a demand for vocational courses to develop skills at levels 4 and 5. These would open up an expanded role for new providers, many of them online, that do not possess the infrastructure of a regional college. There would then of course be a crucial need to ensure quality control via Ofqual. Indeed, it might be optimal for these courses to be overseen on a national scale by relevant professional organisations.
Accreditation and assessment of individual students is going to be a challenge, and perhaps the Minister will say how this will be addressed. It is a challenge especially because traditional continuous assessment in non-practical subjects has been scuppered by the advent of ChatGPT and its successors. It should be possible for a student to be tested by some kind of examination board without having followed any particular course, rather as you can now take an A-level wherever or however you have been taught.
We must of course attend to the skills that the UK economy needs, but let us not focus too narrowly on them. We heard about STEM, but we must also have STEAM, where A stands for the arts. Let us also not focus too much on the earnings boost engendered by courses. For instance, if advanced study enables a creative artist to become proficient enough to follow his or her avocation, that is surely valuable even if they earn barely a living wage.
Finally, let us hope that the lifelong learning initiative does indeed promote what it aims to do, and that universities and other bodies are incentivised to release content—excellent lectures, for instance—that is not just part of a course but can be watched free online in this country and around the world by those seeking education for its own sake and not for vocational reasons. In a society undergoing vast technological change, the aims should be to widen people’s horizons and spread knowledge of UK culture, so that the life chances of young people are not constrained by what they have achieved or failed to achieve by the age of 21.
We are increasingly reliant on vulnerable globe-spanning networks for food supply and manufacturing, and novel viruses more virulent than Covid-19, perhaps even artificially engineered, could emerge at any time and spread with devastating speed. Our interconnected society is ever more vulnerable to other scenarios—massive cyberattacks, cascading failures of crucial infrastructure, or even accidental nuclear war—whose likelihood and impacts are rising year by year. Covid-19 must be a wake-up call, reminding us of our fragility. Such worries cannot now be dismissed as flaky doom-mongering.
What does it take to enhance the UK’s preparedness for future threats? The first need is better joined-up government. Covid was primarily a medical catastrophe, but it cascaded into other sectors, including schools and, through its impact on supply chains, manufacturing. We have learned lessons about the trade-off between efficiency and resilience. For instance, there need to be firmer guidelines about who—regionally as well as centrally—has authority in emergencies.
Secondly, we need to optimise the use of limited resources in preparing precautionary measures. For that, we need a more rigorous assessment of what scenarios are most probable. As has been said, the published risk register has hitherto been inadequate. There is little input from external experts and too much secrecy, and no pandemic other than flu was rated a major threat. Moreover, the quoted likelihoods pertain to the next two years, but that is not enough when the threats may be rising year on year, as they surely are for engineered pandemics and massive cyberattacks. We need to plan maybe 20 years ahead.
As we have heard, the Government’s recently announced national resilience framework is welcome. It proposes a new institutional architecture to raise the profile of resilience within government and Parliament, with, as we recommended, a head of resilience equal in rank to the National Security Adviser; an annual parliamentary statement on resilience; a new national resilience academy to train up a new generation of risk-management professionals across relevant sectors; and a national exercising programme, embracing both military-style and virtual reality exercises to test our resilience to a range of risks. This measure was, incidentally, forcefully advocated by the two former Defence Secretaries we were lucky to have on our committee.
The credibility, acumen and perseverance of the first person appointed as head of resilience will be crucial in determining whether the scheme as a whole fosters practical and effective action of the kind that our committee recommends. Also crucial is whether the Chancellor signs up to spending whatever sums of money—probably quite modest—are needed to implement the framework’s proposals. Given these prerequisites, we would be on the verge of making real progress.
However, cross-party consensus on the institutional framework is essential if we are to properly address measures that stretch far beyond the timescales of a single Administration. A good start, already signalled by the shadow Paymaster-General, would be a manifesto commitment to nominate a Cabinet-level Minister with full-time responsibility for resilience. Moreover, the Opposition could add a series of substantive points not fully covered in the framework—in particular, establishing a statutory, independent resilience institute on the model of the Climate Change Committee or the Office for Budget Responsibility that can report to Parliament on the reality or unreality of the claims for resilience being made by relevant Ministers. That again was recommended by our committee. The UK should lead the campaign for the international co-operation that is needed to minimise the extreme threats, most of which are global.
If the Government vigorously implement their new framework, and the Opposition push more vigorously in these directions, then our democracy will be working as it should to protect society from catastrophe.
We should welcome some savings—for instance, tightening the terms of procurement contracts and cutting the number of consultants—but most cuts are far too drastic to be absorbed by efficiency savings. Indeed, they add to costs. The gross inequalities in our society and the poverty and insecurity suffered by the sick, the old and the low paid have of course been aggravated by two events beyond our Government’s control: the Covid pandemic and the fallout from Ukraine. But the impact has been worsened by the Government’s policies; in particular, their reluctance to raise taxes.
We have learned from recent crises that there is a trade-off between efficiency and resilience. I have two examples: first, dependence on long supply chains, allied with just-in-time delivery, can be a false economy if large-scale manufacturing is jeopardised when one link in the chain breaks; and, secondly, although it may be efficient to have 95% utilisation of intensive care beds in hospital, it is prudent to bear the cost of spare capacity to cope with emergencies. It is unrealistic to claim that crises in our schools and hospitals can be solved by efficiency savings alone. These institutions are forced to pinch and scrape to make savings, which can lead to reduced efficiency because of decaying infrastructure, outdated IT, falls in staffing and staff morale, and so on. Our expenditure and outcomes have fallen below those of other advanced countries, a contrast starkly spelled out, incidentally, in a coruscating article in the latest Economist.
I had the privilege of being on the Times Education Commission, the subject of a recent debate in this House instigated by the noble Lord, Lord Lexden. An especially moving section of its excellent report highlighted the problems at preschool level. A head teacher of a northern primary school recounted that many children in reception classes could not say their name and were not toilet trained. This was a consequence of the shutdown. Home schooling was a reality for children with educated and well-resourced parents, but absolutely not for children of disadvantaged and insecure parents. Even before Covid, this contrast had grown starker because of the closure of around 1,000 Sure Start centres. It will be hard for these kids to catch up after facing such deprivations at the beginning of life. For them, equal opportunity is a sham.
At the end of life, too, conditions for the disadvantaged are shamefully aggravated by austerity. There is a gap of almost a decade in life expectancy between the rich and the poor. We all know that it is the underfunding of care homes, distressing for the old and sick, that leads to the overwhelming of hospitals that endangers all of us.
These inadequacies cannot be cured by efficiency gains. The predicament that we are in surely calls for a rise in some taxes—for instance, on multinationals, on six-figure salaries, and on dividends and capital gains. Our nation should emulate the US less and northern Europe more, to sustain public services that we can be proud of and which allow the rising generation to fulfil their potential in a more secure and equal society.
Having said that, I will focus my comments on post-16 and university-level education, declaring an interest as a member of Cambridge University. After two years of Covid-induced disruption, we cannot expect full reversion to the old normal, nor should we wish for it. The upheaval should energise reforms to the whole post-18 education sector, which needs more institutional variety and more flexibility in its offerings. There is now more experience of online and remote teaching. We can make a more realistic assessment of the most effective use of contact hours for students. We can also learn from institutions that had already spearheaded that transition pre pandemic; for instance, the fast-expanding Arizona State University.
In more traditional universities, the basic lectures on core topics are given live to audiences of 200-plus. There is no real feedback or discussion during these lectures, although, at least in the better institutions, they are supplemented by smaller classes and tutorial groups. Little would be lost if those big lectures were videoed rather than live. Indeed, they could then be more carefully prepared and achieve higher quality. Moreover, not only could they be watched more than once by the primary student audience; they could be made available globally. There have been successful precedents at MIT and Stanford, and scholars such as Harvard’s Michael Sandel have become international stars.
Universities in the UK should either set up platforms themselves or offer their best material as content for the Open University so that it gets wide dissemination. Whereas the overseas campuses set up by some western universities, mainly in Asia, have been rightly criticised as diluting the brand, the wider viewing of good lectures, even if not part of a course offering online credits, should be reputationally positive for the lecturer’s university in the same way as a widely used textbook authored by a faculty member would be. However, what is needed is that students should be able to choose their preferred balance between online and residential courses and to access distance learning of high quality. We need more facilities for part-time study and lifelong learning, and a blurring of the damaging divide between technical and university education.
Incidentally, purely online courses—the so-called MOOCs—have an ambivalent reputation. As stand-alone courses without complementary contact with a real tutor, they are probably satisfactory only for master’s-level vocational courses intended for motivated mature learners studying part-time, but they can have wider benefits as part of a package that incorporates live tutoring as well.
We need to do more than just incorporate virtual activities into the existing framework, though. The higher and further education system needs much more drastic restructuring. Universities all aspire to rise in the same league table, which gives undue weight to research over teaching. Most of their students are between 18 and 21, undergoing three years of full-time, usually residential, education and studying a curriculum that is too narrow even for the minority who aspire to professional academic careers.
Even worse, the school curriculum is too narrow as well, as we have heard. The long-running campaign for an international baccalaureate-style curriculum for 16 to 18 year-olds deserves to succeed but it needs co-operation from universities, whose entrance requirements now overtly disfavour applicants who straddle science and humanities in their A-levels.
We should abandon the view that the standard three-year full-time degree is the minimum worthwhile goal, or indeed the most appropriate one for many students. The core courses offered by the first two years of university education are often the most valuable, both intellectually and vocationally. Moreover, students who realise that the degree course they have embarked on is not right for them or who have personal hardship should be enabled to leave early with dignity, with a certificate to mark what they have accomplished. They should not be disparaged as wastage. They should make the positive claim, as many Americans would, that “I’ve had two years of college”, while those running universities should not be berated for taking risks in admission and should certainly not be pressured to entice students to stay, least of all by lowering degree standards.
More importantly, everyone should have the opportunity to re-enter higher education, maybe part-time or online, at any stage in their lives. This path could become smoother—indeed, routine—if there were a formalised system of transferable credits across the whole system of further and higher education, as urged in the Augar report. We should strive for a flexible grant or loan system offering entitlement to three years’ support, to be taken à la carte at any stage in life. This would mean, for instance, that those who leave university for any reason after two years are not tainted as wastage, but can get some certificate of credit and an entitlement to return and upgrade later.
Admission to the most demanding and attractive courses is naturally competitive and always will be, but what is not acceptable is that the playing field is still far from level. The killer fact, and the most intractable challenge for the access agenda, is that maybe half of the UK’s young people do not receive the quality of teaching at school that allows them a fair prospect of qualifying for the most competitive university courses. Even those who have been at good schools will be handicapped if, because of the specialisation enforced by A-levels, they unwittingly drop science at 16, for instance, and later realise that they wish to pursue it.
It will be a long slog to ensure that high-quality teaching at school is available across the full geographical and social spectrum, and it may be impossible without a narrowing of the gulf between the resources of the private fee-paying schools and those in the state system. In the meantime, it would send an encouraging signal if the UK universities whose entry bar is dauntingly high, such as Oxbridge, were to reserve a fraction of their places for students who do not come straight from school. They could thereby offer a second chance to those who were disadvantaged at 18 but have caught up by earning two years’ worth of credits at other institutions or online, for instance via the Open University. Such students could then advance to degree level, even on the more challenging courses, in maybe two further years.
Let us hope that the recent crisis catalyses constructive innovations in higher education. This sector is currently one of the UK’s distinctive strengths and crucial to our future, but it must not be sclerotic and unresponsive to changes in needs, lifestyle and opportunities. A rethink is overdue if the UK is to sustain its status in a different world. The Times commission’s report sets the wider context, and it should be welcomed by all those in higher and further education.
Not long ago, we had the major reorganisation of science funding that led to UKRI, introducing a layer of administration above the established research councils, such as the MRC. We have also had Innovate UK, and this year two high-level advisory bodies have been set up to oversee all this, adding yet another layer to the hierarchy. Surely we should be cautious about establishing another entity before these changes are bedded in and prove their worth. As the Minister said, 50 times more funds are spent on existing institutions than are envisaged for ARIA. The priority should surely be to ensure the maximum efficiency and minimal bureaucratic problems in these other organisations.
Confidence and high morale drive creativity, innovation and risk-taking. This is true in blue-skies science and equally true in the often greater challenges of the development of new products or businesses. A motive for ARIA is the perception that existing institutions cannot offer this, but the best institutions still do—I am lucky to work in one. But even in these privileged environments, there are dark problems ahead. My younger colleagues seem ever more preoccupied with grant cuts, proposal writing, job security and suchlike. Prospects of breakthroughs will plummet if such concerns prey unduly on the minds of even the best young researchers. Worse still, the profession will not then attract the most ambitious talent from the next generation, nor draw in foreign talent. Many of us worry that the UK’s traditional strengths are consequently in jeopardy.
However, these negative perceptions can be reversed. I will mention two specific gripes that can be addressed. The first is that bodies that allocate public funds focus on ever more detailed performance indicators to quantify the output. This has the best of intentions, but its actual consequences are often the reverse: to constrain long-term thinking and prevent even a minority from having the privilege of fully focusing on long-term problems. The second bugbear is the REF, which is not only burdensome for universities but also offers perverse incentives to researchers, discouraging risk-taking.
The difference in pay-off between the very best research and the merely good is, by any realistic measure, hundreds of per cent. What is crucial in giving taxpayers enhanced value for money is maximising the chance of the big breakthroughs by backing the judgment of those with the best credentials and supporting them appropriately. Research universities do this and should be cherished. They benefit the nation through direct knowledge transfer from their labs to industry and through the quality of the students they feed into all walks of life. Moreover, high-profile academics can seize on a promising idea from anywhere in the world and run with it. Let us not forget that, despite the UK’s strength, at least 90% of the best ideas come from the rest of the world.
Despite these strengths, our universities are not always the most propitious environments for projects that demand intense and sustained effort. Dedicated laboratories such as the LMB are, in some contexts, preferable. Indeed, our national strength in biomedical sciences stems from the existence of laboratories allowing full-time long-term research, which is getting ever harder in today’s universities. Moreover, UK government funding is massively supplemented by the Wellcome Trust, the cancer charities and a strong pharmaceutical industry. To ensure effective exploitation of new discoveries, research institutions must be complemented by organisations, whether in the public or private sector, that can offer adequate manufacturing capability when needed. This fortunate concatenation certainly proved its worth in the recent pandemic. Government and private laboratories are crucial in health, plant science and energy. We may need more of them, and perhaps also more innovative ways of ensuring that IP generated here is optimally exploited.
However, given this complex ecology, do we need a new organisation to achieve ARIA’s aims? It is not clear that we do. ARIA’s proponents think that UKRI’s bureaucratic features are chronic—that we must be fatalistic about this and offer a lucky few the chance to bypass it. Indeed, UKRI has a very broad mission and is working hard to reduce bureaucracy, but much of it is imposed by government regulations. Can the Minister tell us why there could not be within UKRI a separate fund for supporting some projects in the ARIA style via a ring-fenced part of its budget that was less constrained by Cabinet Office and Treasury controls, which slow things up and constrain experimentation in funding allocation mechanisms? Could the Industrial Strategy Challenge Fund, a pan-UKRI programme, also achieve some of ARIA’s goals if bureaucratic constraints on it were loosened?
Finally, retaining our scientific standing is crucial. The UK will decline economically unless it can ensure that some of the key creative ideas of the 21st century germinate here and, even more, are exploited here. Unless we get smarter, we will get poorer.
Viewed in this context, the Bill is incremental and circumscribed. It allows those of sound mind with a terminal prognosis to end their lives at a time and place of their choosing, rather than suffer a lingering decline marked by pain and loss of autonomy. That is why some make a one-way trip to Switzerland and why the lives of loved ones are sometimes ended in ways that are strictly illegal. These acts may not result in prosecution, but a shadow of criminality hangs over them and adds to the grief of those whose motive is compassion.
It is a misperception that support for the Bill betokens less admiration for the hospice movement or less motive to enhance palliative care. Likewise, it is a misperception that disabled and vulnerable individuals are less supportive of the Bill than the public at large. My late colleague Stephen Hawking thought that assisted dying would be wrong unless one were in great pain. Thankfully, his own last days were peaceful, but he thought none the less that the disabled should have the option.
We have heard widespread concerns that the vulnerable would be pressured to opt for assisted dying so as not to be a burden—a serious concern—but it is worth mentioning a counterargument. When the great Baroness Mary Warnock spoke in a debate in this House in 2014, she put it thus:
“All the way through their life until this point, putting their family first will have been counted a virtue, and then suddenly, when they most want to avoid the trouble … sorrow and misery of disruption to their family, they are told they are not allowed to follow that motive.”—[Official Report, 7/11/14; col. 1908.]
She found this “extraordinarily puzzling”.
Baroness Warnock’s robust stance would resonate with a few of the Bill’s supporters, but we all welcome the Bill from the noble Baroness, Lady Meacher, because it would surely give great comfort to far more of us than would actually use its provisions.
Plainly, many things are utterly unpredictable a century ahead, but environmental, population and climatic scenarios can be analysed. It may be prudent to pay an insurance premium today, as it were, to guard against global threats that could emerge a century hence. Expert assessment of these issues is surely an endeavour that should be expanded, and it deserves all-party support.
We should also scrutinise our built environment. Our grand public buildings, such as the one we are in now, the great churches, museums and monuments, and even our railway stations, date from the Victorian era or earlier. They were built to last; not so the tower blocks that dominate the skyline today. Their planned lifetime is typically only 50 years, and they are not a legacy that future generations will thank us for.
I conclude with a cameo. Ely Cathedral is near where both the noble Lord, Lord Bird, and I live. It overwhelms us today, so think of its impact 800 years ago and the vast enterprise that its construction entailed. Most of its builders had never travelled more than 50 miles; the Fens were their world. Even the most educated knew of nothing beyond Europe. They thought that the world was a few thousand years old, and that it might not last another thousand. However, despite these constricted horizons in both time and space, and the deprivation and harshness of their lives, they built this vast cathedral. Those who conceived it knew that they would not live to see it finished. Their legacy still elevates our spirits, nearly a millennium later.
What a contrast that is to today. Unlike our forebears, we know a great deal about our world. Technologies that our ancestors could not conceive of now enrich our lives and understanding. We know that we are the stewards of a “pale blue dot” in a vast cosmos, a planet with a future measured in billions of years, whose fate depends on humanity’s collective actions this century. However, all too often, our focus is short-term and parochial. We downplay what is happening even now in impoverished faraway countries and give too little thought to the world we leave for our grandchildren.
In today’s runaway world, we cannot aspire to leave a monument lasting 1,000 years, but it would surely be shameful if we persisted in policies that denied future generations a fair inheritance. We need more cathedral thinking, and that is a signal that this Bill will send.
We in the UK can realistically move to an economy that uses less energy, but the developing countries in Asia and Africa cannot reach what we consider acceptable living standards by 2050 without generating far more power than they do today. Not only must their per capita energy consumption rise, but they will by then harbour more than 1 billion more people. It is the CO2 emissions from these countries that matter more to the world’s climate, and indeed to us. We must hope that these countries’ growth will be far greener than Europe’s has been, and that they learn from our mistakes and follow the precepts of the UK FIRES report.
Unconstrained climate change, with the risk of tipping points leading to genuine catastrophe, is a threat to global security. Minimising this threat deserves the scale of sustained effort that we commit to our national defences, and it demands the urgency of a national emergency.
We have a head start. For decades we have had the Culham laboratory for fusion research. The newly funded Faraday Centre to develop improved batteries is a welcome step. This should be the nucleus of a broader and larger venture encompassing other energy technologies—especially those where it is realistic for the UK to achieve a lead—and computational climate modelling.
If a scaled-up and wisely prioritised programme can give the UK a lead in more efficient and cheaper carbon-free generation, vast developing markets could afford to leap-frog directly to clean energy, rather than building, for instance, coal-fired power stations. Our efforts could thereby make far more than a 1% difference to the global effort to achieve net-zero carbon emissions.
The optimum structure and governance for accelerating our national effort—hubs of expertise to spearhead innovation and development—deserve serious discussion. We need institutions with long-term missions devoted to a national goal, crucial supplements to product-driven research in industry and journal-driven research in universities. Should these be free-standing national labs or beefed-up versions of the so-called catapults with mixed public/private funding? In any case, a modest fraction of funding should be reserved for blue-sky exploration of speculative ideas, probably best done in universities, which is the idea behind the fashionable ARPA model. How can we best ensure that there is take-up from UK industry so that we accrue long-term economic benefits if we pioneer important new technologies?
Unlike defence R&D, this effort aims to combat a shared global threat, so we should forge co-operation and alliances with other nations, especially the developing countries for which the threat is most severe and most intractable. How best this can be done is a formidable political challenge that should surely be addressed.
A key mantra for the UK should be “If we don’t get smarter, we’ll get poorer.” With bold reforms to our education and measures to promote an innovation culture, we could contribute far more than our pro rata share to solving these global challenges. It would be hard to think of a more inspiring challenge for young engineers than to deploy UK expertise to provide clean energy for ourselves and for the developing world, or a better investment in the UK’s future.
We should abandon the view that a standard three-year residential degree is the minimum worthwhile goal. Students who realise that the course they embarked on is not right for them or who have personal hardship should be enabled to leave early with dignity, with a certificate to mark what they have accomplished. They should say, “I had two years of college.” They should not be disparaged as wastage. Universities should not be pressured to entice them to stay, least of all by lowering degree standards, but they should have the chance to come back later.
Some 18 year-olds of very high intellectual potential who have had poor schooling do not have a fair chance of admission at 18 to the most competitive universities. Even if they are given contextual offers, they may still struggle with the most demanding courses. That is why I urge that Oxbridge, and other universities whose entry bar is dauntingly high, should reserve a fraction of their places for students who do not come straight from school but have caught up despite their disadvantaged backgrounds through earning two years’ worth of credits online, at another institution or via the Open University. Such students could then advance to degree level in perhaps just two further years.
These reforms could be implemented routinely if the Government were to follow the Augar report and formalise some system of transferable credits across the whole higher and further education system. Moreover, another of that report’s welcome suggestions is that everyone should be entitled to a total of three years of support that can be taken à la carte, as it were, at any stage in life.
Finally, despite what I have said, the most intractable causes of inequality are imprinted before the age of five on those brought up in stress and poverty. These concerns should be at the top of our agenda.
Climate change is potentially a threat to national security, so combating it deserves the scale of sustained effort that we commit to our national defences. This requires large-scale, long-term, mission-driven efforts in institutions like those that we have for defence R&D. In the United States, two successive Energy Secretaries, both, amazingly, world-class physicists, advocated establishing new national laboratories to spearhead energy innovation, along the lines of Los Alamos. That is what we need here: institutions, with long-term missions, devoted to a national goal, crucial amplifiers of product-driven research in industry and journal-driven research in universities. For decades we have had the Culham laboratory for fusion research. The newly funded Faraday centre for battery development is welcome, but it should be the nucleus of a broader and larger venture to address other energy technologies, especially those where it is realistic for the UK to achieve a lead—and for computational modelling, too.
Real breakthroughs are needed in energy generation, storage and smart grids to meet the 2050 targets, but there is a stronger motivation. We produce only 1% of global CO2 emissions—itself not crucial—but we produce more than 10% of the world’s high-impact research. If a scaled-up and wisely prioritised programme led to cheaper carbon-free generation, India and other vast developing markets could leapfrog directly to clean energy rather than building coal-fired power stations. Our efforts could thereby make far more than a 1% difference to the world, and to our national economic benefit. It would be hard to conceive of a more inspiring challenge for young scientists and engineers or a better investment in the UK’s future than devising clean and economical energy systems for the world. Likewise, incidentally, we can contribute disproportionately to another global challenge, sustainable food production, if we expand and deploy our world-leading expertise in genetics and plant science.
This leads to my final point. Our idealistic younger generation need the requisite expertise, which is why it is good that the Government have responded to the Augar report’s recommendations about 16 to 19 year-olds’ further education. That report suggested reforms of higher education as well. To promote lifelong learning, it recommended that everyone should be entitled to three years’ support, to be taken at any stage. This would encourage flexibility and would mean, for instance, that those who leave university for any reason after two years are not tainted as wastage, but can get some certificate of credit and an entitlement to return and upgrade later in life. In his previous role, the Minister supported such reforms, so will the Government implement that part of the Augar report?
A key mantra for this country should be, “If we don’t get smarter, we’ll get poorer.” With bold reforms to our education, and our innovative approach to R&D, we could aspire to contribute far more than our pro-rata share to solving global challenges and enhance our economy as well.
Energy from the sun flows through intricately structured ecosystems to give us our food, other natural resources, forests and other habitats. However, there is a spiritual value too. In the words of the great ecologist, EO Wilson:
“At the heart of the environmentalist worldview is the conviction that human physical and spiritual health depends … on the planet … Natural ecosystems—forests, coral reefs, marine blue waters—maintain the world … as we would wish it to be maintained”.
Our body and our mind evolved to live in this particular planetary environment and no other. These sentiments resonate with all conservationists. Of course, here in the UK there is widespread anxiety about the effects on wildlife of urbanisation, pesticides and so forth. There is understandably more focus on birds and cuddly mammals than on worms, insects and microfauna.
The UK accounts for only 1% of the world’s population and an even smaller fraction of its land mass. None the less, we can have disproportionate leverage in promoting global sustainable development, which was defined in Brundtland’s classic 1987 report as meeting,
“the needs of the present”,
especially those of the poor,
“without compromising the ability of future generations to meet their own needs”.
Against this, we have the World Wide Fund for Nature’s well-known estimate that the world is consuming natural resources at about 1.7 times the sustainable level.
However, the global response to this concern is muted, partly because those in poor countries, who are at the sharp end of the impacts, understandably have shorter horizons of both space and time. The main impediment is, of course, that natural capital does not feature in national budgets. If a forest is cut down, that should be recorded as a negative contribution to GNP, but it is not. Incidentally, it is fortunate that the UK’s inputs to the UN biodiversity conference in China next year are being co-ordinated by my Cambridge colleague, Sir Partha Dasgupta, who is one of the real world leaders in environmental economics.
We need to preserve diverse ecosystems. They are more sustainable and more resilient. When conditions change, some minority species with different traits may gain an advantage. These other species are, as it were, waiting in the wings to take over if required to do so. Sparser ecosystems cannot respond so well to changing conditions. However, changes in climate and in land use can, in combination, induce sudden changes—tipping points that amplify each other and cause runaway change. If humanity’s collective impact on nature pushes too hard, the resultant ecological shocks could irreversibly impoverish our biosphere.
Rising and more demanding populations are putting growing stresses on the entire biosphere. We have, of course, entered the new geological era called the Anthropocene. Biodiversity is threatened when land is built on, cultivated or overgrazed, and when large areas are subdivided. These concerns are aggravated if extra land for food production or biofuels encroaches on land left for natural forests.
To feed 9 billion people in 2050 while avoiding these threats will require further-improved agriculture that is low-till and water-conserving, and GM crops, together with better engineering to reduce waste, improve irrigation and so forth. However, there will also be limits on the amount of energy available and, in some regions, severe pressure on water supplies. To feed the world we might need dietary innovations: converting insects, which are highly nutritious and rich in proteins, into palatable food, and making artificial meat instead of beef. The buzz phrase is “sustainable intensification”.
How can the UK be most effective? Regarding climate change, our Climate Change Act sets stringent targets, but even if we meet them we will reduce global emissions by only 1%. However, up to 10% of the world’s innovative ideas gestate in this country. Some of us have argued that we can amplify our leverage, as it were, on solving the climate challenge by massively enhancing research into clean energy systems so that we can accelerate their improvements and the decline in their costs. In that way, countries such as India can afford to leapfrog to a clean network rather than building coal-fired power stations.
I venture a similar argument in the context of today’s debate. If we expand and deploy our world-leading expertise in plant science, and prioritise associated engineering advances, we can substantially enhance the chance that the planet can be fed without devastating the natural world. It would be to our economic benefit in this country if we can get a lead in these key technologies. It is hard to think of a more inspiring challenge for our brilliant young biologists and engineers than using their skills to develop more efficient agriculture for the world’s rising population, without degrading the wonders and beauty of the natural world.
The most devastating consequence of biodiversity loss is extinction—destroying the book of life before we have read it. I would like to end with another quote from E. O. Wilson, that,
“if human actions lead to mass extinctions, it’s the sin that future generations will least forgive us for”.
However, space activity has burgeoned. We depend routinely on orbiting satellites for communication, satnav, environmental monitoring, surveillance and weather forecasting—and for science. NASA’s budget remains much larger than that of the European Space Agency, but it is mostly spent sustaining America’s pre-eminence in manned flight. On the unmanned front, we should proclaim more loudly that ESA has parity. The successes of Rosetta, Planck, Gaia and Copernicus—all strongly involving the UK—fully match what NASA has achieved. We can be proud of Europe’s publicly funded space effort and should remain key players.
In parallel, the Government should foster commercial projects, supporting launch sites and research and development. They should also promote educational ventures—this is where Leicester and the Open University deserve special mention. We must not forget the influence of such enterprises on young people. Space is second only to dinosaurs in fascinating the young. In coming decades, the entire solar system will be explored by fleets of tiny automated probes, interacting with each other like a flock of birds. Robotic fabricators will construct solar energy collectors, telescopes and industrial-scale structures in space.
Will there be a role for humans? The practical role for them gets ever weaker with each advance in robots, sensors and miniaturisation. It is therefore hard to justify massive funding by taxpayers. Manned spaceflight should be left to privately funded adventurers prepared to participate in a cut-price programme far riskier and far cheaper than western nations could impose on publicly supported civilians. The phrase “space tourism” should be avoided. It lulls people into believing that such ventures are low risk. If that is the perception, the inevitable accidents will be traumatic. These exploits must be sold as dangerous sports or intrepid exploration. By 2100, thrill seekers in the mould of Sir Ranulph Fiennes may have established bases on the moon and Mars. Elon Musk of SpaceX says he wants to die on Mars, but not on impact. We should cheer on these enthusiasts.
We should never expect mass emigration from the earth. Here I disagree with Musk and my late colleague Stephen Hawking. It is a dangerous delusion to think that space offers an escape from the earth’s problems. Coping with climate change is a doddle compared to terraforming Mars. Nowhere in our solar system offers an environment even as clement as the Antarctic. There is no planet B for ordinary, risk-averse people. We must cherish our earthly home.
Climate change is a prominent concern. Under business-as-usual scenarios, we cannot rule out, later this century, catastrophic warming and tipping points triggering long-term trends such as the melting of Greenland’s ice. A child born today has a high chance of living beyond 2100. If you care about that generation and those beyond, you should deem it worth paying an insurance premium now to protect against those worst-case scenarios. As economists such as Stern and Weitzman have argued, these are contexts where it is inappropriate to discount the future at the standard rate that a developer planning an office building with a 30-year lifetime would use. As a parenthesis, stimulated by the noble Lord, Lord Judd, I note that there is one policy context where a zero discount rate is applied: to radioactive waste disposal, where the depositories are required to prevent leakage for at least 10,000 years. That is somewhat ironic, given that we cannot plan the rest of energy policy even 30 years ahead.
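As a parenthetical illustration (my own arithmetic, with the rate chosen purely for example), the force of the discounting point can be made explicit. The present value of a benefit B received t years ahead at discount rate r is

\[ \mathrm{PV} = \frac{B}{(1+r)^{t}}, \qquad \text{e.g.}\ \frac{1}{(1.06)^{80}} \approx 0.01, \]

so at a developer’s commercial rate of 6% a benefit 80 years ahead is worth only about 1% of its face value, whereas at a near-zero rate it retains essentially its full weight. That is why the choice of discount rate dominates any long-horizon cost-benefit calculation.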
And another thing: if humanity’s collective footprint gets too heavy, the resultant ecological shock could irreversibly impoverish our biosphere. A UN report this year claimed that 1 million species were at risk of extinction. That is about 10% of the total estimated number of species—many are not yet identified. We are destroying the book of life before we have read it. To quote the great ecologist E. O. Wilson,
“mass extinction is the sin that future generations will least forgive us for”.
However, politicians will not gain much resonance by advocating sacrifices now when the benefits seem to accrue mainly to distant parts of the world, decades in the future. Even within our own country, there is reluctance to spend enough on disaster mitigation—vaccines, flood defences, et cetera. Unless there is a clamour from voters, manifest in politicians’ inboxes and the press, Governments will not properly prioritise measures crucial for future generations. Sustaining that clamour needs effective campaigning, not just expert advice, and the enlistment of charismatic individuals to change the public mindset. To quote the great anthropologist Margaret Mead: “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has”.
I give two examples. The papal encyclical Laudato si’ eased the path to consensus at the Paris climate conference in 2015. The Pope got a standing ovation at the UN. He has 1 billion followers, mainly in Latin America, Africa and East Asia. There is no gainsaying his impact, or the Church’s global reach, long-term vision and concern for the world’s poor. More parochially, I doubt that Michael Gove would have become exercised about non-degradable plastic waste had it not been for the BBC’s “Blue Planet” programmes, fronted by our secular Pope, David Attenborough—especially the footage of an albatross returning to its nest and regurgitating plastic debris, an image as iconic as the polar bear on a melting ice floe is for climate campaigners. As many noble Lords have emphasised, it is encouraging to witness more activists among the young. They hope to live to the end of the century. Their campaigning is welcome and their commitment gives grounds for hope.
I close with a thought that strikes me when I visit the great cathedral at Ely, near where my noble friend Lord Bird and I both live. Its builders essentially knew of nothing beyond Europe. Many thought that the world was a few thousand years old and might not last another thousand. Despite these constricted horizons in both time and space, despite the deprivation and harshness of their lives, and despite their primitive technology and meagre resources, they conceived a glorious building that they never lived to see finished and which still elevates our spirits centuries later. What a shaming contrast it would be if, despite our far greater resources and wider horizons, we pursued policies that denied future generations a fair inheritance. Our perspectives should be global and stretch at least a century ahead. Our responsibility to our children, to the poorest and to preserve life’s diversity surely demands nothing less. That is why we need institutional changes to enshrine long-term thinking more firmly in decision-making.
As a parenthesis, I would note that there is one policy context where an essentially zero discount rate is applied, and that is to radioactive waste disposal, for which the depositories are required to prevent leakage for 10,000 years. That is somewhat ironic, given that we cannot plan the rest of our energy policy even 30 years ahead. Consider this analogy. Let us suppose that astronomers had tracked an asteroid and calculated that it would hit the earth in 2100, 80 years from now—not with certainty but with, say, a 10% likelihood. Would we relax and say that it is a problem that can be set on one side for 50 years? People will then be richer and it may turn out that it is going to miss us anyway. I do not think we would. There would surely be a consensus that we should start straight away and find ways to deflect it or to mitigate its effects.
The pledges made at the Paris and Poland conferences are a positive step, but they are not enough, especially if the aim is to limit the expected temperature rise to 1.5 degrees. The recent report of the Energy Transitions Commission, co-chaired by Adair Turner, was bullish about achieving the requisite global transition to zero carbon within 40 years. An extra investment of $900 billion per annum would be needed globally. That is a stupendous figure, but it is only about 0.6% of world GDP over the next four decades. It is still, of course, a massive challenge. Politicians will not gain much resonance by advocating unwelcome lifestyle changes now when the benefits accrue mainly to distant parts of the world, decades in the future.
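As a rough check on that percentage (my own back-of-envelope arithmetic; the GDP figure is an assumption, taking world GDP to average roughly $150 trillion a year over the coming decades as economies grow from today’s roughly $85 trillion):

\[ \frac{\$0.9\ \text{trillion per annum}}{\$150\ \text{trillion per annum}} = 0.006 \approx 0.6\%. \]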
Achieving the energy transition will require accelerated R&D into all forms of low-carbon energy generation and into other technologies where parallel progress is crucial, especially storage—batteries, compressed air, pumped storage, flywheels, et cetera—sequestration and smart grids. This scenario offers a win-win option for the UK. Implementing our Climate Change Act is important, although it will cut global emissions by less than 2%. But we produce more than 10% of the world’s best scientific research, so we can strive for a global lead in energy R&D and aspire thereby to make far more than a 2% difference.
Solar and wind are front-runners, but other methods have geographical niches. One of ours is tidal energy. Our topography induces especially large-amplitude tides on Britain’s west coast. We should therefore explore tidal barrages and lagoons. Because of intermittency in sun and wind, the long-term goal should be continental-scale DC grids carrying solar energy from Morocco and Spain to less sunny northern Europe and east-west to smooth peak demand over different time zones—perhaps all the way along the Belt and Road to China.
It is surely worthwhile for the UK, given its traditional expertise in nuclear energy, to explore a variety of fourth-generation concepts, which could prove cheaper, more standardised and safer than existing nuclear designs. The faster these clean technologies advance, the sooner their prices will fall so that they become affordable to, for instance, India, where the health of the poor is now jeopardised by smoky stoves burning wood and dung, and where there would otherwise be pressure to build coal-fired power stations. It would be hard to imagine a more inspiring goal for our young engineers than to spearhead improved clean and affordable energy.
How can the long-term global goal of a low-carbon world get sustained political traction? How can it compete for political attention with urgent local issues? It can happen, just as other social attitudes have been changed in the past, if individuals with mega-influence can mould public opinion. I have two examples. The papal encyclical Laudato si’ had huge impact, eased the path to consensus at the Paris climate conference in 2015 and gained the Pope a standing ovation at the UN. This week our great secular guru, David Attenborough, has espoused the climate cause at Davos.
The young are far more activist, unsurprisingly, as they can hope to live to the end of the century. Their campaigning is welcome. Their commitment gives grounds for hope. To give a parochial instance, I was especially pleased when some of our Cambridge students took an initiative that led to setting up the APPG for Future Generations. Today’s actions—or inactions—on environment and energy will resonate centuries ahead. They will determine the fate of the entire biosphere and how future generations live. We in this country can genuinely take a lead.
Moreover, AI is still at the baby stage compared to what its proponents expect in coming decades. Twenty years ago, few people envisioned the extent to which smartphones and IT have now changed the pattern of our lives, so it would be rash to predict how transformative AI could be in the next 20 years. Already, AI can cope with complex, fast-changing networks, such as traffic flows or electric grids. It could enable the Chinese to gather and process all the information needed to run an efficient planned economy that Marx could only have dreamed of. In science, its capability to explore zillions of options could allow it to find recipes for better drugs or for material that conducts electricity with zero resistance at ordinary temperatures.
But the implications for society, as we have heard, are already ambivalent. If there is a bug in the software of an AI system, it is currently not always possible to track it down. This is likely to create public concern if the system’s “decisions” have potentially grave consequences for individuals. If we are sentenced to a term in prison, recommended for surgery or even given a poor credit rating, we would expect the reasons to be accessible to us and contestable by us. If such decisions were entirely delegated to an algorithm, we would be entitled to feel uneasy, even if presented with compelling evidence that, on average, the machines make better decisions than the humans they have usurped.
Integration of databases by AI systems has an impact on everyday life and will become more intrusive and pervasive. Records of all our movements, our interactions with others, our health, and our financial transactions will be “in the cloud”, managed by a multinational quasi-monopoly. The data may be used for benign reasons—for instance, for medical research—but its availability to internet companies is already shifting the balance of power from Governments to the commercial sector.
There will also be other concerns—about privacy, for instance. Are you happy if a random stranger sitting near you in a restaurant or on a train can, via facial recognition, identify you and invade your privacy, or if fake videos of you become so convincing that visual evidence can no longer be trusted, or if a machine knows enough about you to compose emails that seem to come from you? The report rightly raises concerns about these matters.
A report published in February, prepared with input from my colleagues at Cambridge and Oxford, was entitled The Malicious Use of AI: Forecasting, Prevention and Mitigation. Its focus was on the near term, and it highlighted three concerns: AI could allow existing types of cyberattack to be achieved with less effort, and therefore by more actors; by use of, for instance, co-ordinated drones, AI could facilitate physical attacks, and cyberattacks could occur on the software of driverless cars; and AI could allow more effective targeting of misinformation, denial of information, surveillance and so forth. Overall, the arms race between cybercriminals and those trying to defend against them will become still more expensive and vexatious with the advent of AI.
The academic and commercial communities now speak with one voice in highlighting the need to promote “robust and beneficial” AI, but tensions are already emerging, as AI moves from the research and development phase to being a potentially massive money-spinner for global companies.
The committee’s report emphasises the incipient shifts in the nature of work—an issue addressed in several excellent books by economists and social scientists as well as by the noble Lord, Lord Hollick, and others today. Clearly, machines will take over much of the work of manufacturing and retail distribution. They can replace many white-collar jobs: routine legal work, such as conveyancing; accountancy; computer coding; medical diagnostics and even surgery. Many professionals will find their hard-earned skills in less demand. In contrast, some skilled service sector jobs—for instance, plumbing and gardening—will be among the hardest to automate.
The digital revolution generates enormous wealth for an elite group of innovators and for global companies, but preserving a healthy society will surely require redistribution of that wealth. There is talk of using it to provide a universal income. But it is surely better when all who are capable of doing so can perform socially useful work rather than receiving a handout. Indeed, to create a humane society, Governments should vastly enhance the number and status of those who care for the old, the young and the sick. There are currently far too few such people, and they are poorly paid, inadequately esteemed and insecure in their positions. It is true that robots can take over some aspects of routine care, but old people who can afford it want the attention of real human beings as well. Let us hope that we never reach a situation in which we accept automata as substitutes for real teaching assistants reading stories to children with the proper human empathy of the kind the noble Lord, Lord Reid, emphasised.
Not only the very young and the very old need human support: when so much business, including interaction with government, is done via the internet, we should worry about, for instance, a disabled person living alone, who must navigate websites to claim their rightful benefits or to order basic provisions. Think of the anxiety and frustration when something goes wrong. Such people will have peace of mind only when there are computer-savvy caregivers to help the bewildered cope with IT and to ensure that they are not disadvantaged. Otherwise, the digitally deprived will become the new underclass. Caring roles provide more dignified and worthwhile employment than the call centres or warehouses where jobs have been lost. Does the Minister think that it is possible to use the earnings of robots, as it were, to achieve Scandinavian-level welfare in which the demand for carers is fully met?
Even if we have machines that can, effectively, interact with the real world, this will not be enough to ensure that they have human empathy. Computers learn from a “training set” of similar activities, where success is immediately “rewarded” and reinforced. Game-playing computers play millions of games; computers gain expertise in recognising faces by studying millions of images. But learning about human behaviour involves observing actual people in real homes or workplaces. The machine would feel sensorily deprived by the slowness of real life and would be bewildered. Only when this barrier can be surmounted—and perhaps it never will be—will AIs truly be perceived as intelligent beings, and if that happens, their far faster “thoughts” and reactions could then give them advantages over us.
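To make the contrast concrete, here is a toy sketch in Python (entirely illustrative; the actions, payoffs and numbers are invented) of the reward-driven, trial-and-error learning described above. The point is the sheer number of cheap, fast trials the loop needs before its estimates converge, a throughput that observing real human life at its natural pace could never supply.

import random

# Toy illustration (hypothetical, not any real system): an agent repeatedly
# tries actions whose payoffs it does not know; actions that yield reward
# are reinforced, so its estimates converge only after vast numbers of trials.
ACTIONS = ["a", "b", "c"]
TRUE_PAYOFF = {"a": 0.2, "b": 0.5, "c": 0.8}  # hidden reward probabilities

value = {a: 0.0 for a in ACTIONS}  # the agent's learned estimates
count = {a: 0 for a in ACTIONS}

for _ in range(1_000_000):  # millions of cheap, fast trials
    # explore occasionally; otherwise exploit the best-known action
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(ACTIONS, key=value.get)
    reward = 1.0 if random.random() < TRUE_PAYOFF[a] else 0.0
    count[a] += 1
    value[a] += (reward - value[a]) / count[a]  # incremental average

print(value)  # approaches the hidden payoffs after a million trials

A game-playing system can run such loops millions of times an hour; learning about human behaviour affords no comparable throughput.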
Many experts think that the AI field, like synthetic biology, already needs guidelines for “responsible innovation”. Moreover, the fact that AlphaGo Zero achieved a goal that its creators thought would have taken several more years to reach has rendered DeepMind’s staff even more bullish about the speed of advancement. But others, like the roboticist Rodney Brooks—creator of the Baxter robot and the Roomba vacuum cleaner—argue that these projections will remain science fiction for a long time. Be that as it may, it is crucial to be aware of the potential of artificial intelligence, even though real stupidity will always be with us.
I will focus rather less on the communication medium than on the messages that need to be communicated, in a context where we can expect far larger-scale and more catastrophic breakdowns and terrorist attacks than we have had up to now. Cities would be paralysed without electricity. The lights would go out, but that would be far from the most serious consequence. Within a few days, our cities would be uninhabitable and anarchic. We know what even an intrepid maverick with cyber skills can do, and there have been warnings from senior officials in this country and the US of how devastating and long-lasting a highly organised cyberattack could be.
Our high-tech and interconnected world is vulnerable in other ways. We depend increasingly on elaborate networks: air traffic control, international finance, globally dispersed manufacturing and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic, albeit rare, breakdowns. Social media can spread panic, rumour and economic contagion literally at the speed of light.
Not enough effort goes into minimising these risks, nor, and this is the focus of this debate, into preparing to cope with the aftermath of catastrophic events. There are two reasons for this underpreparation. First, we are in denial. We respond rationally and proportionately to fire risks, for instance, because, even though the chance of our home burning down is small, we have frequent reminders of fires and the damage they can do. We can estimate their probability and therefore the risk.
However, catastrophic events are rare—perhaps unprecedented or newly emergent—so we do not have this experience. We are lulled into believing that they will never happen, so we are underprepared. It is an analogue of what happens in the financial world: gains and losses are asymmetric, and many years of gradual gains can be wiped out by a sudden loss. Likewise, in cyberdisasters and those that might be caused by bio-error or bioterror, the risk is dominated by the rare but extreme events. The magnitude of the worst potential catastrophe is growing unprecedentedly large. Too many people are in denial about this; it deserves a higher place in public policy and attention.
The second reason for underpreparation is political reluctance to spend money in ways that may prove nugatory, as is likely to be the case for any low-probability but high-consequence scenario. For instance, in some years when a flu epidemic has been predicted, the Government have prudently stocked up on the appropriate vaccine but then been unfairly criticised for waste if some was not needed. We must overcome that mindset if we want to prepare for these extreme events. It is reassuring that the Government have given priority and resources to cyberdefence, where there is an arms race between the attackers and the defence and it is unclear whether the defence will always win.
It is surely not scaremongering to raise concerns about human-induced risks from bio-error or bioterror. We know all too well that technical expertise does not guarantee balanced rationality. The global village will have its village idiots, and they will have global range. The spread of an artificially released pathogen cannot be predicted or controlled. The rising empowerment of tech-savvy groups, even individuals, by biotechnology will pose a growing intractable challenge to governments and aggravate the tension among freedom, privacy, and security. Most likely, there will be a societal acceptance of a shift towards more intrusion and less privacy.
Before closing, I want to focus on nuclear threats. Even a stalwart establishment figure such as William Perry, the former US Defense Secretary, has expressed concern about scenarios involving terrorist nuclear weapons. Be that as it may, there have already been nuclear incidents that involved not explosions but serious radiation release. Such nuclear accidents hold lessons about the appropriate response—evacuation versus staying put, for instance—and the messages that should be sent. In 2011, the Japanese tsunami claimed nearly 20,000 lives, mainly through drowning. It also destroyed the Fukushima nuclear power stations, which were inadequately protected against a 15 metre-high wall of water and sub-optimally designed: the emergency generators, for instance, were located low down and were disabled by flooding.
Consequently, radioactive materials leaked and spread. The surrounding villages were evacuated, but with confused messaging and in an unco-ordinated way. Initially, just those within three kilometres of the power stations were evacuated, then those within 20 kilometres and then those within 30, with inadequate regard for the asymmetric way in which the wind was spreading the contamination. Some evacuees had to move three times, and some villages remain uninhabited, with devastating consequences for the lives of long-term residents. Indeed, the mental trauma and other health problems, such as diabetes, have proved more debilitating than the radiation risk. Many evacuees, especially elderly ones, would be prepared to accept a substantially higher cancer risk in return for the freedom to live out their days in familiar surroundings. They should have that option. Likewise, incidentally, the mass evacuations after the Chernobyl disaster were not necessarily in the best interests of those displaced.
In Japan, it was the tsunami itself, not the nuclear accident, that caused the major death toll. The public fear of radiation is enhanced by a special dread factor and a feeling of helplessness. As a consequence, all nuclear projects are impeded by disproportionate concern about even very low radiation levels and the cost is raised by overstringent clean-up requirements. To offer a specific recommendation, were a city centre to be attacked by a dirty bomb—a conventional chemical explosion laced with radioactive material—some evacuation might be needed, but, just as in Fukushima, there is a risk that present guidelines would mandate a response that was unduly drastic, both in the extent and the duration of the evacuation.
The immediate aftermath of a dirty bomb incident is not the right time for a balanced debate. That is why this topic needs a fresh assessment, and wide dissemination of clear and appropriate guidelines on the risks to different categories of people. We need discussion of a proportionate response and of how to communicate it.
Finally, it is clear that such threats are growing in their variety and severity. We need to devote more resources to reducing our vulnerabilities, planning the optimum response and communicating it. The past is a poor guide to the future when fast-changing technologies are involved. There is a salutary mantra: “The unfamiliar is not the same as the improbable”.
I declare an interest as a member of Cambridge University, which spends £5 million a year on access initiatives. A special initiative targets young people in care and we are discussing a transfer year programme. Last year, 22% of our home admissions came from an ethnic minority. We take background into account in admissions, though we do not have quotas. Incidentally, we took 58 black students—not many, but a third of all black students in the country who had two A* grades. Cambridge gives a bursary to one home student in four; increasing this is a prime goal of our current fundraising. But Oxbridge could do more to widen its appeal. I would favour, for instance, a cut-back in activities that sustain a Brideshead image of extravagance and entitlement. However, even after all realistic outreach efforts, there will be high-potential young people who, through unfavourable circumstances, do not reach the bar at 18. That is why it would send an encouraging signal if Oxbridge were to reserve a fraction of its places for students who do not come straight from school but have caught up by earning credits online, at another institution or via the Open University. Indeed, I suggest to the Minister that there is a case for formalising some system of transferable credits across the whole HE system.
Some critics of Oxbridge cite America’s Ivy League as a model to which we should aspire. I would strongly contest that claim. A recent survey revealed that more than 20% of the Ivy League’s intake came from families in the top 1% of income, whereas only a few percent came from the bottom 60%. Moreover, Harvard overtly offers an inside track to the children of alumni or donors—something that we in Oxford or Cambridge would absolutely not countenance. What makes Cambridge and Oxford special is that they combine the strength of world-class research universities with the pastoral and educational benefits of the best American liberal arts colleges. They are unique worldwide in doing that. That is why, according to a recent HEPI report, their students show a higher satisfaction rating—and work harder—than those studying elsewhere. Incidentally, in terms of student satisfaction, HEPI found little difference between Russell Group and non-Russell Group universities. This is not surprising, because league tables focus on research, which is, at best, weakly correlated with teaching quality.
There is in any case a need for more diversification among universities. They should not all try to compete in the same league table. Let us hope that some universities, right across the UK, emulate US liberal arts colleges in offering high-quality teaching, and thereby counterbalance the special allure of Oxbridge. Moreover, there is too sharp a demarcation between higher and further education, aggravating concerns about our skill levels and apprenticeship quality as compared with other advanced countries. Let us focus on these broader deficiencies, rather than just on Oxbridge.