Advanced Research and Invention Agency Bill – in a Public Bill Committee at 2:01 pm on 14th April 2021.
Q We will now hear evidence from Professor Dame Anne Glover, president of the Royal Society of Edinburgh and special adviser to the vice-chancellor of the University of Strathclyde, and Tabitha Goldstaub, co-founder of CognitionX and chair of the AI Council. Could you both introduce yourselves, please? We have until 3.45 pm for this session. Welcome and thank you.
Professor Glover:
I am Anne Glover. I have just finished my three-year term as president of the Royal Society of Edinburgh, so I am no longer president of Scotland’s national academy. I am a molecular biologist by background. I have researched how we respond to stress at a molecular level, and I have looked at the diversity of microbes in the environment. I am very interested in, and have worked on, translating knowledge gained from research into policy making, and as such I was chief scientific adviser for Scotland from 2006 to 2011 and chief scientific adviser to the President of the European Commission from 2012 to 2015. I am currently at the University of Strathclyde.
Tabitha Goldstaub:
I am Tabitha Goldstaub, the co-founder of CogX and the chair of the UK Government’s AI Council. We are an independent council created in 2018 as part of the industrial strategy’s AI sector deal. We support the Government via the Office for Artificial Intelligence, our secretariat, in offering independent expert advice, as well as community engagement. I am really here to share the thoughts of those I polled and workshopped with from that AI community. Thank you very much for inviting me.
Thank you very much, both, and welcome. Our first set of questions will be from shadow Minister Chi Onwurah.
Q Thank you very much to our guests for joining us and bringing your experience. It is a real pleasure to have an all-female panel. It is a real rarity when it comes to discussions of science generally.
There is some confusion about what ARIA should be. Should it be focused on cutting-edge research, should it be about the transformational translation of existing research, or should it bring the two together? What I would like to know from both of you, with your wide experience, is what you think ARIA’s goal or purpose should be. What problem should it fix?
Ms Goldstaub, you have experience of artificial intelligence, which could be a critical area of research. Do you think it is going to change the nature of research, how we research and how scientific research occurs? How should we envisage ARIA responding to that?
Tabitha Goldstaub:
First and foremost, on your point around focus, it really needs to be about imagining how funding is done to find the breakthroughs that others describe as being at the edge of the edge, with the freedom to test, for example, things like lotteries, grants, the speed of contracting, loans, prizes and all the things that we have heard about throughout today. I really think that ARIA is about exploring those ideas.
If you are looking for a single focus, I believe wholeheartedly in Mariana Mazzucato’s mission-driven approach to innovation. The AI community was incredibly catalysed by the industrial strategy grand challenges, and of course there are these urgent missions. Alondra Nelson said in her first speech after being nominated by Joe Biden that all science should address social inequality. That said, it is still unclear to me whether there needs to be one challenge enshrined in law or whether the programme managers should have that freedom; I think we will hear more from others on where they stand on that. The most important thing is that I just kept hearing time and again from the community I spoke to, similarly to what the gentleman from DARPA said, that this is a time to serve. People really want to find a place to do research that saves people’s lives, especially in the AI ecosystem.
I think that your question about the impact that AI has on research is a very good one. AI is impacting research, just as it does all areas of the economy, both disrupting the fabric of research itself and advancing it. We have seen AI create state-of-the-art information-retrieval capabilities, sift through vast amounts of data and speed up the publishing process, so it is changing the process of research, but it is also, of course, making discoveries and scientific advances itself.
Three per cent. of all peer-reviewed journals are now AI-related and this new trend of AI plus another science is really booming. So biology is currently experiencing its “AI moment”. We saw in the State of AI report that there is a 50% year-on-year increase in papers; 25% of the output since 2000 is a biology and AI collaboration. DeepMind’s AlphaFold is a really good example of that. Demis Hassabis has publicly said that one of the drivers at DeepMind is AI that could win a Nobel prize, so he has already set the bar for an ARIA.
Q Thank you very much. And Professor Dame Anne Glover?
Professor Glover:
You were asking whether the UK’s ARPA, or ARIA, should have a single purpose or focus. In terms of subject area, I would argue not, because you do not know where the good ideas are coming from. It would be really valuable to have quite a wide and informed debate, from a very broad spectrum of interests, as to where ARIA’s calls should come from. When it is looking to issue a call for research, what are the big areas? In a way, this is quite similar to looking at the grand challenges, which Tabitha has already mentioned.
However, there is an opportunity here in looking at grand challenges, because who decides what those grand challenges are? Voices that are very frequently missing in that debate are citizens’ voices. If I think of some of the big grand challenges—certainly a number of those were funded at the European Commission—often they would be narrowed down, so that there would be three absolutely superb proposals in quite different areas of research, which would have come through the review process. Then it would be a decision about which one we should fund. And that is an ideal time to say to citizens, “What is it that you’re interested in?”
Of course that makes the research very relevant; it would tend to make it translatable into the economy, into life and wellbeing, the environment and so on; it also then has a substantial buy-in from citizens. That is not unimportant, because at the moment we are enjoying a big buy-in from citizens around science, as they see the relevance of what funding science over a period of years actually does in being able to deliver us—in this case—from a pandemic, and of course there is climate change there as well. So that is important.
The focus of the purpose needs to be crystal clear, so that there is no confusion with other funding agencies. That would just lead to mini-chaos, or things falling through the gaps and being shuffled around, which is not at all helpful.
The last thing I would say in this context is that there is an opportunity to look at how you fund. For perhaps quite understandable reasons, current research funding is quite formulaic; it is box-ticking to get the funding. What sort of projects will be funded? Normally, low-risk ones. There is an opportunity to look at high risk, high reward. I would hope that the leadership of ARIA considered that, to fund things that are really innovative, you yourself have to be innovative. They will need to think and be imaginative about how they go about sourcing and funding projects, so that we do not just get a modified version of what we are currently seeing, but can fund in a way that is more bespoke. By doing that, we are opening up what I hope would be exciting possibilities.
Q Just to follow up with two brief questions. The points you raise, though different, have raised similar questions in me. First, in terms of deciding what areas of research and challenges should be addressed, what if we rely on the leadership, as you suggest, Professor Dame Anne? What ARIA seems to support is what I would call the “big man” model of research: choose five or six great men—generally, they are men—and give them the freedom to be geniuses and to choose what they want to research, to have, as Dominic Cummings says, “extreme freedom”.
Ms Goldstaub, you say that AI is changing how research happens, and also the scale, I would say. Is it possible that we can find five or six great people who know all the different potential areas of research, who can make these kinds of choices on behalf of the British people, using public money, and can integrate the changing nature of research, while at the same time being innovative and having, we would hope, diversity of thought and hopefully also of gender, region, discipline, etc? Is it possible to find five or six people like that? What elements of the structure of ARIA are important to promote that?
Tabitha Goldstaub:
It is totally possible to find those people. I cannot speak across all science, but I definitely feel there is a generation of young, mid-career AI talent who feel they are in a sort of gap—the fuzzy middle, as Andy Hopper calls it. They are asking themselves, “What am I doing? The planet is burning. I don’t want to work at the big banks or the big tech giants.” They want the academic freedom of the universities, but they do not want to work alone. They see the financial reward of successful start-ups, but they want to take long-term bets. Generally, they want to make the world a better place.
It is people like that who fit the mould that we are looking for. I also worry about the lone genius model. We are well beyond individual success being seen like that. This is all about community. One of the things I have heard time and again is that people do not want to be funded as individuals but as groups of people. It is the community that would come together around a programme manager that is really important.
Yes, we have to find four or five of those individuals, but it is the people who work with them who make a huge difference. It is the open science, open data and spirit of openness that will go a long way to finding those people who will culturally fit and enable us to engage well beyond just those five individuals and find the edge-of-the-edge breakthroughs that we really need. I hear people saying, “I have ideas that I just don’t even put forward right now; they are unthinkable, because they are unfundable.” Once people can come together, you start to unlock that, which saves you from this lack of diversity where you are just funding individual after individual and effectively asking people to compete with each other.
Q Thank you. Professor Dame Anne?
Professor Glover:
Just for easiness, can I ask Committee members to just call me Anne? Otherwise it is a bit of a mouthful.
On the idea of five or six individuals, I would caution on that slightly. I am partly bought into the idea, but if you are identifying five or six individuals, you have already pinned your colours to the mast in what you want. You have already prejudged the areas you want to work in or the ideas that you are interested in.
Where the five or six people might be really important to identify is for the running of ARIA itself. Whether it is the overall director of ARIA or the research leaders in the different themes that might be funded in ARIA, they will be key people and they need to be credible, trusted, very effective at communication and really open-minded. In my view, a large part of the success of ARIA will come from having quite inspirational leaders throughout.
In terms of how you fund and who it is that you are funding, I would go back to what I was alluding to earlier. There needs to be a big conversation about this. There are often older men who have got a reputation in research, so they are naturally the ones we go to, but as I know from bitter experience, as you get older, sometimes your thinking closes off in particular areas and you are less open to ideas. I am thinking of Professor Donald Braben, whose comments the Committee would probably be very interested in. He set up a venture research unit in BP, back in the ’90s I think, and has written several books about this kind of blue skies research area.
What Braben said is that we should look for “irreverent researchers and liberated universities”. Do not look for people who have a research area that we think is really important and we must go there. Debate widely among researchers, of course, but also Government Departments, devolved Administrations, foresighters, businesses, citizens. Let us imagine the future. ARIA could be the stepping stone, if you like, to inventing that imagined future. For a future to exist, you have to imagine it in the first place and you have to convert it into what you would like. There are lots of different ways of doing that. With inspirational leadership, you can move towards that. You can probably increase dramatically your chance of getting it right by having an irreverence around what you do, and not the usual measures of success.
Q Thank you to our great witnesses. I have one question for both witnesses. What is the importance of giving ARIA independence from Government and Ministers, compared with other parts of the R&D system?
Professor Glover:
I would argue that there is huge value in that. Obviously, the funding is coming from Government, but by giving it freedom from Government you might also be giving it the freedom to fail in many ways, and that is exceptionally important. If it is seen as very close to Government—whichever Government is in power—it potentially becomes a bit like a political football, either in what is being funded or in the direction suggested for where ARIA funding should go.
If there are notable failures of funding, which you would expect if it were a high-risk, high-reward funding agency, political opponents will also say, “Well, look, this is a complete disaster under your custodianship. Here are all the failures.” You just want it to be separate from that. It is also part of trying to embrace the unthinkable, if you like, in terms of the research we do and the areas we go into. Necessarily, those will sometimes be difficult areas, and not ones that you should expose Government to either. In the spirit of opening everything up, I would say that keeping that independence is extremely valuable.
Tabitha Goldstaub:
I totally agree with what Anne just said—I would have said exactly the same thing. I think that the separateness and independence are really vital to the success of ARIA. The only thing that I would really think about adding here is how important it is that ARIA does have a relationship with Government, because it will need to have many customers, both private sector and public sector. The programme managers will need to create those bonds with central Government Departments individually.
I think that a commitment from Government to respect that independence, but also to become a good customer, is very important. The health and transport sectors are good examples of where that might work. What is different is that a surprising number of the next big scientific fields and the next big breakthroughs, such as artificial intelligence, are going to depend on systemic transformation, where you cannot separate the technology from the policy and regulation.
So yes, ARIA has to be independent, but it also needs to ensure that it works really closely with central Government and with regional and local government. Local government spends about £1 billion on procurement, and cities are key investors in infrastructure, so finding a good link with local government, as well as with central Government, is important. This will hopefully end up creating, as Anne suggested, a way that people feel part of this. Regional strengths deliver benefits to actual localities. Even if it is within the next 10, 15 or 20 years, it is really important that government feels part of that, even though ARIA is independent.
Q Thank you, Tabitha and Anne, for your detailed responses so far. I have a couple of points, if I may. I think it is safe to say that you seem broadly in favour of ARIA, and you think it will perhaps fill a void. In terms of the resources that ARIA will have, we heard earlier today about the benefits of being a small, agile agency, and £800 million is being allocated. Do you feel that is sufficient for ARIA to meet its needs?
On independence from Government, from looking at your bio, Anne, I can see that you have worked for a few public agencies. If ARIA does not have the public contract regulations and freedom of information in place, will that free it to do what it needs to do? Should we see that as a positive, as opposed to a check and balance, given that we are referring to public money?
Professor Glover:
I will deal with that point first—it is an exceptionally interesting point. Initially, when I saw that it might not be subject to FOI, I was thinking, “What are the pros and cons of that?” There is one thing that needs to be fundamental in ARIA, and that is an openness and transparency about what it is funding and why, and how it is doing it. For most things—UKRI would be similar to this—what you provide information on obviously cannot be something that would break the General Data Protection Regulation or that would be commercially sensitive. That should hold exactly true for ARIA as well.
There needs to be some thinking around the whole aspect of openness and transparency, because that brings along with it trust and engagement. If there were any suggestion that Government funding was going into ARIA and it was being syphoned off into particular areas, and we could not find out what those areas were, there would be nervousness. People would, quite rightly, object to that, so there would have to be some greater thought given to how the agency is able to be open and transparent. It might be writing its own rulebook in that area, about what it will provide information on and what it should not.
On whether £800 million is enough, you are asking a scientist and a researcher here, so no, it is never going to be enough, but we have to start somewhere. I cannot make a direct comparison with DARPA’s funding, which is about $3.5 billion or $4 billion per annum, but I might be a bit out of date on that. It does not seem unreasonable to me to start at that level of funding and to start off on the journey to see what is and is not working, where there is greater demand and where you might need more funding to meet it. What you would want to see is that this was such a success that there was substantial demand for funding.
On the other hand, you do not want to get into the situation that standard research funding is in—I have certainly seen it many times in my lifetime—where you are putting in 10 research proposals to get one funded. That is an enormous waste of everybody’s time, including that of the agency funding the research. There needs to be a balance between how much money is available and what you hope to do with it.
The last thing I would say is that how that funding is apportioned needs to be carefully thought out, because there needs to be some security of funding. Traditionally in the UK, we have normally had three-year tranches of funding. Long before the end of the three years you have to try to think about how you get continuation of funding. You might hope that ARIA could look at a different model of funding, which might span different timescales depending on what the nature of the project was.
Many projects, particularly ones that are quite disruptive in their thinking, will not deliver in a short period of time—two or three years. Some could, but some will not, so there needs to be that security of funding across different annual budgets to allow investment over a period of time.
Tabitha Goldstaub:
I will start with the amount of funding. I see the £800 million as just a start. I think that £800 million is sufficient as long as ARIA works in partnership with Government Departments, the private sector and other grant makers. ARIA should not be restricted in matching or exceeding the Government funding with funding from the private sector. There are people in the community I have spoken to who think that, for true intellectual and financial freedom, ARIA should be able to more than double the Government funding. It was good to see in the Bill the potential for ARIA to take equity stakes in companies and start-ups in a venture fashion, which could lead to it increasing that part over time and making more funding decisions. I see the £800 million as really just a starting point.
On freedom of information, I agree with Anne that openness is key. Transparency fosters trust, and I do not think there is any need to remove freedom of information; we need to keep it to help with the efforts around connectivity. If the community are going to feel part of ARIA and will it to do good things, they need to be able to use freedom of information. I cannot see the argument against it on grounds of administrative cost. Earlier this morning, we heard Ottoline Leyser say that UKRI gets 30 requests a month. If ARIA is 1% of the budget of UKRI, perhaps it would get 1% of the requests, which would be fewer than four a year. I cannot see the case for an exemption, for that reason.
The other reason why there is a desire for secrecy and no FOI is that people traditionally are not comfortable innovating and failing fast in the open, but that is changing. DeepMind has teams. I have spoken to Sarah Hunter, who is at Google’s moonshot factory, X. She explained how they started in secret, and how appealing that felt as a way to protect people from any feeling of failure, but what they learned is that there are so many other, much better ways than secrecy to incentivise people and to give them the freedom to fail. Actually, allowing for more transparency builds much more trust and encourages more collaboration and, therefore, better breakthroughs.
Anne has spoken about the community. I definitely will speak again about the community, but in addition to the community engagement, ARIA will need to have a press department and media engagement teams that are separate from BEIS, separate from the grid and separate from the Government, to enable it to be agile in its communication and foster a two-way conversation. In order to answer your question, I really think this is the key point: openness and transparency create more trust and more breakthroughs.
That is really helpful. Thank you, both.
Q Good afternoon, and thank you for joining us and for your excellent contributions. Anne, you made a very interesting point about the independence of ARIA, to avoid it potentially being pointed at as a political failure. If you are investing in high-risk, high-reward research, there will be failure—that is undoubtedly true. May we ask your advice on what metrics we should use to measure ARIA over its early years, before there is potentially any output that has demonstrated a transformational benefit to society? On top of that, could you give us some advice on how project managers should go about selecting projects to explore? Should it be just on the basis of interesting science, or should there be a vision of the commercialisation of that science at the end, to motivate them? We are only going to be able to fund a certain number of projects, and presumably applications will outstrip the funding fairly quickly.
Professor Glover:
How we measure success in the early years is a very important question. I am not going to give you an exact answer, but what I might say is that maybe we should not try. That would be unusual, wouldn’t it? That is what I meant earlier about not just following the formula of, “You need to tick these boxes to demonstrate success.”

Of course, you would hope that whoever is leading ARIA would have an idea of how you are developing the innovation ecosystem that will be supported by ARIA. They might have some ideas about numbers of applications, where they are coming from, and having a good look at and analysing that, and looking at the amount of interdisciplinary or multidisciplinary research that comes forward. That is always quite hard to fund. Historically, when I have been involved in such things, interdisciplinary research tends to get kicked around different agencies: “This is more for you.” “No, this is more for you.” Everybody is worried about their budget and thinks, “If you fund it, we won’t have to fund this from our budget.” Thinking about the number of applications that could come from a broad range of different disciplines—that would be good.

I am not answering your question directly. I am just saying that it is very easy to say, “Let’s have a way of measuring success,” but sometimes that can be stifling.
It is a bit like—perhaps not in the years timescale of ARIA—how it is around the time of year when we plant seeds in our garden or wherever. If you want to measure how well a seed is germinating, if you keep pulling it up and having a look at it you are really going to set it back, so sometimes you just need to think, “I’m hoping that in four or five months’ time this is going to be a broad bean plant with broad beans on it. I just need to wait and see.” I know that that is difficult to do.
The second thing you asked about is commercialisation. I cannot for the life of me remember who said this, but someone once said that there are two types of research: applied research and research not yet applied. That is quite true. There might be some areas where you think there is a very easy market, but if we look back and learn from experience, we find that an awful lot of what has been developed came out of research with no application in mind. The whole area of medical diagnostics, for example, was pure research. There was no commercialisation; it was just a fundamental biological problem that was being investigated. Some of the outcomes of that research were molecules called monoclonal antibodies. They are beautifully specific diagnostics—supremely sensitive—that can pick out particular molecules of interest that might tell you whether you have a particular disease or have been exposed to a particular compound or whatever.
In renewable energy or an area around that, you might understand that there will be a lot of potential commercial partners and opportunities. In some other areas, perhaps not. This might be an opportunity to think about what the relationships would be like between ARIA and existing research funding, because it might be part of an ecosystem. I would hope that there were distinct roles for UKRI and ARIA but very good communication between the two, as well as very many other stakeholders, in order to identify areas that might not be suitable for UKRI funding but that might have a strong commercial or development potential that ARIA would be much more adept at supporting.
Q It is a pleasure to serve under your chairmanship, Mrs Cummins. Anne, you talked about citizen buy-in. That would take an element of trust, so my two questions are around that. What could or would good transparency look like without stifling innovation, in both of your opinions? Secondly, if we do not have FOIs and we do not know precisely how this will be reported to us, do we need an ethical baseline to ensure that we are spending public money on the greater good?
Professor Glover:
On citizen buy-in, I think that would be reasonable to try to achieve; I do not think it would be insurmountably difficult in many ways. If I give you the example of some of the grand challenges that were funded at European Commission level, it came down to three brilliant projects. Which one will we fund? If the European Commission made the decision about which one was going to be funded, inevitably different member states would complain: “Why is that getting funded in that member state? This other project was just as good.”
All sorts of problems can arise. Whereas, if you asked European Union citizens which one they would like to be funded, they would say what matters most to them. That is quite an interesting insight into the mind of the European citizen, or it would have been, in that particular instance.
I do not think you are in any way betraying confidences; you are talking about whether it is a project looking at delivering limitless amounts of sustainable energy, or a project mapping the functioning of the human brain so that you might be able to exploit that in other ways. You are not saying how you are going to do those things; you are not revealing confidences or information that would be inappropriate or would undermine those doing the research. I think we might be worrying needlessly about that.
As to the ethical baseline, of course this has to be ethical. Tabitha and I are probably agreeing too much with each other, or perhaps we are going back to the same thing. If you are not open and transparent, you will have problems. That is just not rocket science. For example, there are many agencies that are not part of Government but that might receive governmental funding. Scotland’s National Academy, the Royal Society of Edinburgh, is one of those. We are completely independent from Government. We get funding from the Scottish Funding Council, which gets its money from Government. We are not subject to FOI requests but we voluntarily behave as if we are. If we did not do that, people would say, “They’re being directed by Government, so the reports that come out of the RSE will be influenced by Government.”
If we say, “This is how we approach it,” and if somebody comes to us and asks for information, we behave as if it were an FOI request. It has never been too onerous. The only onerous time for me with FOI requests was when I was chief scientific adviser to the President of the European Commission, when it became unrealistic, because I had such a small team and there were so many requests. Generally, that is the direction we should be moving in. You do not want to hobble a new agency by making it seem that any aspect of it is secretive. To be able to demonstrate ethical compliance, you need that transparency.
Tabitha Goldstaub:
Ethical transparency is key, but we also have an opportunity with ARIA to set a robust, rigorous ethical review process that is fit for the AI era. We do not currently have that.
There has been a tremendous amount of attention on public-facing ethical principles and frameworks for assessing AI products, but relatively little on the frameworks and practices for assessing research, or on how to launch and manage a data science and AI ethics review board in a way that cuts across disciplinary, organisational, institutional or national boundaries, as ARIA would need to.
If ARIA can work with others on this problem, such as the Health Foundation, which is in collaboration with the Ada Lovelace Institute, or the Alan Turing Institute, it could achieve its mission responsibly, become a beacon for other ARPA-like programmes and tolerate failure much more safely, because ultimately we need to break new ground and to do so with an ethics review, specifically for research that has anything to do with artificial intelligence. It would enable us to set real international standards, if we can get that right. It is both a risk and a huge opportunity for ARIA.
Virginia Crosbie. I am afraid this will have to be the last, very quick question.
Thank you, Chair. It is a pleasure to serve on this Committee. I, too, thank the panel. Tabitha, it is lovely to see you again. You are an inspiration to so many, especially women. Q My question relates to both your expertise and experience in encouraging the next generation of visionary innovators. What do you see as ARIA’s role in future-proofing the next generation? In Anne’s words, they are the future generation of irreverent researchers. How can we ensure that that is spread equally across the four nations of the UK?
Tabitha Goldstaub:
Anne made it so clear that it has to be about engaging with citizens—directly with citizen scientists, but also with citizens who do not care about this yet; we have a real opportunity to excite them. A lot of people say it is really hard, but my answer to that is that it cannot be harder than protein folding. Ultimately, the big challenge for ARIA is to engage with those citizens.
Professor Glover:
Briefly, of course I agree with that, but the biggest challenge might be—this will help in engaging with citizens—being up front right at the very beginning that we expect failure, and that failure is part of the measure of success for an agency like ARIA, because if you were not taking any risks, you would not get any failure. The challenge is that, culturally in the UK, we see failure through an emotional lens, not a scientific lens, whereas I think the opposite is the case in North America. We need to think about that. In a way, just talking about it and saying that that is the case makes it easier for people to understand that we need to fail in order to get the big rewards.
Order. I am really sorry, but I am afraid that that brings us to the end of the time allocated for the Committee to ask questions of this panel. I thank the witnesses on behalf of the Committee for their evidence.