The next item of business is a debate on trustworthy, ethical and inclusive artificial intelligence: seizing opportunities for Scotland’s people and businesses. I ask members who wish to speak in the debate to press their request-to-speak buttons.
I call Richard Lochhead to open the debate.
Today, we are debating a subject that is already having a profound impact on our lives, our society and our economy, and is being hotly debated right now across the world due to its potential future implications for our planet and humanity.
Rapid progress in the development of artificial intelligence and the prospect of its becoming more and more advanced and powerful are leading to some hard questions for the world. Indeed, this debate takes place against the backdrop of international authorities scrambling to respond to the fast evolution of AI with, for instance, European Union and United States lawmakers meeting this week to discuss a draft code of practice prior to discussing what regulation might be required in the future in order to address the risks.
Although recent days have witnessed big personalities in the tech world, including AI pioneers, warning of existential threats that could arise in the future, including even a threat to humanity itself, others are more optimistic and point to the benefits for the world—for our economies, productivity, healthcare, education and general quality of life. For instance, mundane tasks could be carried out by AI to allow citizens to focus on more fulfilling work or leisure activities.
Liz Smith has made a fair point, on which I hope we all agree. I will address the matter in my remarks.
It is our duty as parliamentarians to try to navigate the risks and opportunities, and to consider the consequences of AI, which probably no one anywhere fully understands, including even those who have built the technology.
AI has been with us a long time. Recently, we have all become familiar with voice recognition and facial recognition software, to give just a couple of examples, and further major strides are now under way. The public release of so-called generative AI tools such as ChatGPT—which I have not used to write my speech—means that cutting-edge AI is now at the fingertips of everyone who wants to use it, and it is spreading fast. It took three and a half years for Netflix to get 1 million users. For Instagram, it took two and a half months. For ChatGPT, it took five days.
That has triggered a heated worldwide debate on how to maximise the benefits of the technology while managing its risks. In the past year or so, researchers have found that, simply by making AI models bigger, those models become able to generate answers to many questions in a way that resembles human answers.
All of that is not just harmless fun; generative AI tools will have an impact on jobs, for instance. To give one example, they could automate many tasks in the creative industries. That is not to mention the fact that they were trained on billions of images on the internet with little regard being paid to the intellectual property and livelihoods of their human creators.
Bias, which I will come on to, is one of the here-and-now threats; it is not something for the future. Martin Whitfield has made a good point about why the issue is a topical one that we have to address.
Many professions can be affected. OpenAI claims that GPT-4 can achieve the same as a top 10 per cent law student in bar exams. Generative AI tools will also require rethinking of education assessment methods, because they can write essays on a wide range of topics.
There is also a more sinister aspect of AI. AI tools will make it much easier to spread large amounts of false but convincing information, which could undermine democracy. That will also facilitate cybercrime and, potentially, other types of crime.
AI is powered by data. The tech giants from Silicon Valley have been fined again and again for failing to respect people’s privacy and data rights.
However, it is important not to lose perspective on AI. Most experts do not believe that it will be able to supersede human intelligence without several new breakthroughs, and no one knows when they could happen. Currently, talk of an impending singularity, which means machines thinking for themselves without needing humans, still involves quite a lot of fiction.
Essentially, for now at least, AI is a very powerful tool. It is an important but disruptive tool that many people compare with the invention of the steam engine, for instance. It is up to us, as a society and as a country, whether it is used for good or bad.
In a sense, AI is just the latest technology that seeks to replace human activity. Some of its features, such as opaque systems that make decisions on our behalf, are not necessarily new. We must therefore look at the issue from first principles, and we must ensure transparency, accountability and visibility of the things that AI does. If we start from that principle, maybe that suggests a way forward. Does the minister agree with that insight?
I do agree with that, and I hope that Daniel Johnson will note the motion. Those principles are reflected in the motion that we are all signed up to debating.
As Daniel Johnson has alluded to, as in all previous technological and industrial revolutions, there are winners and losers. It is the job of democratic Governments to ensure that the benefits are spread as fairly as possible and that the risks are controlled—that needs to happen and it needs to happen now. AI is with us and cannot be uninvented. Well-publicised calls for Governments to pay attention to the hypothetical long-term risks of AI should not, as I said a few moments ago, distract us from its real risks, including discrimination because of bias—which Martin Whitfield mentioned—the negative impact on certain jobs if those professions do not evolve, and election manipulation. It is clear that intervention is needed. Even the tech giants around the world that have made AI what it is today are calling for Governments to intervene. Even if there is a suspicion that they are doing that because they want to pull up the ladder from those who are coming behind them, it is an important point to take into account.
In the midst of the worldwide debate and the uncertainty, disagreements and fears, it is important to understand that Scotland, fortunately, is not suddenly waking up to AI. We start from a solid base from which to make the right choices and reap the benefits of AI while controlling its risks. Our universities’ AI research and teaching have been ranked as world class from the start. Data that was released last month by Beauhurst shows that Edinburgh is the top start-up city in the UK outside London, with 12.3 per cent of companies working in AI, digital security and financial technology. We have long recognised the importance of AI. In 2019, we committed to creating an AI strategy for our country and presented and debated our plans in the chamber. Thereafter, our 2021 strategy laid out a clear path for Scotland to shape development and use of AI in a way that is trustworthy, ethical and inclusive.
To deliver that vision, we set up the Scottish AI Alliance, which is a partnership between the Scottish Government and the Data Lab, which is Scotland’s innovation centre for data science and AI. The alliance provides a focus for dialogue and action with industry innovators and educators in order to build the best environment in which to encourage growth and investment. It plays a key role in enabling meaningful two-way dialogue with our citizens to ensure that we build an AI economy and society that protect citizens’ rights and in which no one is left behind and everyone can benefit from and contribute to AI.
Specifically, the alliance is developing a range of tools to help to inform as well as to educate people, while actively seeking citizens’ input. One example is the recently launched Scottish AI register, which offers a simple and effective platform for the public to both understand and have a say in how AI is used to make decisions and deliver public services.
We are also delivering an AI and children’s rights programme, in partnership with the Children’s Parliament. We are working hard to ensure that our workforce has the skills that are required to power a thriving AI-enabled digital economy. In the latest ScotlandIS Scottish technology industry survey, Scottish companies continue to rank AI in their top three greatest opportunities, while 46 per cent of businesses indicate that they need additional AI skills in order to grow. An important element of our work is the “Digital Economy Skills Action Plan 2023-2028”, which was recently published by Skills Development Scotland. We have to continue to address the gaps.
Does the minister believe that the Scottish Government is supporting public bodies and local authorities in a way that prevents them from being risk averse? Are they supported to lead on and adopt new technologies in order to make sure that we do not experience those negative impacts?
Balancing the risks against the opportunities is at the heart of the debate about AI in Scotland. Going forward, we have to get that right, which will involve all parts of the public sector, including local government.
We have to equip our citizens and workers with the technical skills as well as with the broader commercial, ethical and human skills that are needed to make AI a success. We also have to tackle diversity in the workforce. As an example, we will support the DataKirk’s Scottish black talent summit later this year. To help to raise awareness of AI across our entire nation, later this year, the AI Alliance will launch a free online course called “Living with AI”.
We need to embrace the unprecedented economic opportunities of AI, as we did with previous scientific and industrial revolutions. We are doing that by making strategic investments in Scotland. That includes the more than £24 million that has been invested in the Data Lab—Scotland’s innovation centre for data science and AI—which has an extended network of over 1,500 companies. Tenants at the Data Lab are doing great things. They include the Scottish company Trade in Space, which uses space data and AI to inform and facilitate trade of agricultural commodities. IRT Surveys is a Dundee-based organisation that uses thermal imaging to help housing associations and developers to identify heat loss in homes.
We have also invested £19 million in CENSIS, which is our innovation centre for sensing, imaging and the internet of things, which will all need AI to be fully utilised.
We have invested £1.4 million in the National Robotarium, which is home to world-leading experts in robotics and AI. Other companies that are tenants at the Data Lab include Crover, which is developing a robot that moves through grain to ensure that it is stored at the correct temperature and moisture levels. That helps to reduce wastage due to mould or insect infestations, which currently account for about 30 per cent of commodity grain being lost every year in Scotland. Such important uses of AI by those initiatives in Edinburgh and elsewhere are making a really big difference.
We also have Mark Logan’s review of the technology ecosystem. We have invested £42 million in that, as well as £59 million in CivTech, which is a world-class research, development and procurement scheme that enables the Scottish public sector to work with the most innovative businesses on solving the most difficult problems that we face.
There are exciting healthcare innovations happening across Scotland at the moment, as well. For example, NHS Forth Valley, in collaboration with the Scottish health and industry partnership and the West of Scotland Innovation Hub, is currently running a project to use AI to detect skin cancer in the primary care environment in under 25 minutes by 2025. Therefore, there is phenomenal potential to help our health service and to look after the people of Scotland’s wellbeing by using AI.
I have only a couple of minutes left, so I will say that we have a vision to make Scotland a leader in the development and use of AI in a way that is trustworthy, ethical and inclusive. We need Government leadership and regulatory action, but most of the levers of regulation are currently controlled by the United Kingdom Government. Data protection, consumer protection, equality and human rights, employment regulations, medical devices regulation, telecommunications, financial services and self-driving cars are all matters that are reserved to the UK Government.
We are a bit concerned that current UK Government plans for hands-off non-statutory regulation of AI will not meet Scotland’s needs. The UK Government might be softening on that, given what has been happening over the past few weeks, but its response seems to be in contrast to the responses of other countries across the world. We do not want to create unnecessary red tape, but we have a duty to create the right supportive environment for businesses to thrive and for citizens to be protected.
In closing, I will mention a couple of things that I am doing. Next week, I will write to the UK Secretary of State for Science, Innovation and Technology to request an intensified dialogue between the UK Government and the devolved Administrations to ensure that UK Government regulation of and support for AI works for Scotland. To kick-start that process, I propose that a four-nations summit on the implications of AI be held as soon as possible.
We also want to ensure that Scotland’s AI strategy evolves to keep up with the accelerating pace of change in AI. Therefore, I am commissioning the Scottish AI Alliance to lead an independent review to set out what Scotland needs to do now to maximise the benefits of AI, while controlling the risks. The Scottish AI Alliance will come back to us with recommendations in due course.
This debate is one without a motion or amendments, so that, as a Parliament, we can debate the future of our country and our planet and the role that AI will play. I am sure that there will be a lot of consensus, and I look forward to hearing members’ contributions, which will help us to navigate the complex journey over the coming months and years to get AI right for our citizens, our economy and the country as a whole.
That is very generous. Thank you very much.
I am pleased to be able to speak on a subject that is increasingly important and increasingly controversial, as we have just heard. AI will, and already does, provide many opportunities for the future. It is vital that Scotland and the United Kingdom take advantage of those opportunities, including when AI can play a role in specific sectors and when its development can be driven here, in Scotland, by utilising the skills and ingenuity of our people and our businesses.
There are already 50,000 people employed in the UK’s AI industry, and it contributed £3.7 billion to the economy last year. The UK is home to twice as many companies that provide AI products and services as any other European country, with hundreds more created every year. Those businesses have secured £18.8 billion in private investment since 2016.
The UK Government recently launched its white paper to guide the use of AI in the UK, which sets out an approach to regulating AI to build public trust in cutting-edge technologies and to make it easier for business to innovate, grow and create jobs. Doing so also involves putting in place the funding to support the sector. UK ministers have committed up to £3.5 billion to the future of tech and science, which will support AI’s development. The UK Government has pledged £1 billion in funding to the next generation of supercomputing and AI research, to establish the UK as a science and technology superpower. The new quantum strategy, which is backed by £2.5 billion over the next 10 years, will pave the way to bringing new investment, fast-growing businesses and high-quality jobs to the UK. The UK Government also recently announced the AI challenge prize in the spring budget, through which a £1 million prize will be awarded for the best research into AI every year for the next 10 years.
Scotland can and should have the ambition to become a world leader in utilising and developing AI technology. The Scottish Government first published its artificial intelligence strategy in March 2021, setting out its approach to AI in Scotland. It focused on the role of AI in society, arguing that
“the use and adoption of AI should be on our terms if we are to build trust between the people of Scotland and AI”.
I do not disagree with that. Nor do I disagree with the need to follow values-based principles in the development and stewardship of AI. The Scottish Government has adopted UNICEF’s policy guidance on AI for children in its strategy and has committed to reviewing those principles regularly to ensure that they continue to best respond to the values and challenges that AI presents. That is important, given the pace of change.
That is why getting our approach to AI right at the beginning is so important, why the collaborative work of the Scottish AI Alliance will be vital and why the ethical approach of the Scottish Government, and all Governments, must be more than just warm words.
I agree with much of what the member has said, but I wonder whether it is a bit of a risk to view AI as something that is happening in the future. I think that it is already with us. Indeed, many systems are making decisions on our behalf already. It is as much about the here and now as it is about the future. Does the member agree with that point?
As well as agreeing with the Scottish Government today, I am also finding myself agreeing with Daniel Johnson. This is a day of note for us all, I am sure—let us just hope that none of this is recorded. I do not disagree with the member, and I think that the rest of my speech will reflect that. I recognise that, as the minister rightly said, there are applications happening now that, in short course, we need to be caring about.
A successful AI sector in Scotland will need skilled workers. It is vital that the Scottish Government ensures that the necessary skills and training opportunities are in place; I think that my colleague Pam Gosal will likely speak on that later. As we heard in Audrey Nicoll’s members’ business debate on women in science, technology, engineering and mathematics earlier today, the Government must also ensure that STEM is an inclusive sector and that careers in AI are open to all.
A successful sector requires Scottish ministers to ensure that both the economic environment and the infrastructure are in place to support it. We still do not have the connectivity that we need; promises about broadband are broken time and time again, and too many areas still have slow and unreliable services. That needs to change if we are to take full advantage of AI opportunities in communities right across Scotland, and not just here in the central belt.
The Scottish Government has said that it wants to build an AI powerhouse, and again I share the ambition, but we have heard that kind of terminology before. We were meant to become a renewables powerhouse, but the jobs did not materialise in the numbers that were promised.
AI can play—and is playing—a role in a number of sectors already, including health. Only in the past few weeks, we have seen it helping a person to walk again. In Scotland, the Industrial Centre for Artificial Intelligence Research and Digital Diagnostics—iCAIRD—is working with partners across the sector, the national health service and academia on the application of AI to the field of digital diagnostics. iCAIRD was supported in 2018 with money from the UK Government, when it shared a £50 million funding prize from the industrial strategy challenge fund with four other centres.
AI will support our growing space sector in Scotland, which was a subject of discussion in this chamber only a few weeks ago. As the minister mentioned, it is already being used in agriculture by helping to monitor crop health, pest and disease control, and soil health. There are 200 AI-based agricultural start-ups in the US alone; I am sure that colleagues will speak more about specific examples.
It would be wrong to talk about the undoubted opportunities of AI without highlighting some of the challenges that it presents. Only this week, as has been mentioned, more than 350 of the world’s leading voices on AI technology warned:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
It was a short but fairly chilling statement, and a warning that the science fiction of “The Terminator” movies and of an out-of-control Skynet AI risks becoming science fact.
That may be the doomsday scenario, but some of the negatives of AI are already becoming apparent. AI’s progress is rapid and almost uncontrolled. Just as with the growth of social media, it has been released on regulators who are not ready to control it and on a public that is often unable to understand its capabilities or discern when it is being used.
AI is already being used to spread disinformation. Pictures of the Pope wearing a large white puffer jacket—an image created by AI—spread like wildfire on social media, fooling many. That is perhaps an amusing and fairly innocent use, but AI is already being used—or misused—in our schools and universities. It is making it easier and quicker to create increasingly convincing fake videos, with all the potential for exploitative or fraudulent use that that risks. It will be abused, because there will always be people out there seeking to abuse it, whether they are fraudsters, abusers or even hostile regimes.
I am sure that we all want to ensure that Scotland does not limit its ambitions for the utilisation and development of AI. It will likely become an everyday part of all our lives in the next few years, and there are so many areas where it can make a real difference and where it is already having a major impact and making things better. However, the remarkable speed of its development also provides many challenges.
That is why it is so important that we get our approach to AI right now, which means Governments across the world working to ensure that the necessary safeguards are in place. Unleashing the full potential of AI, with the protections that are needed, will require collaborative working to develop a flourishing industry, drive forward investments into research and development and maximise the benefits for the United Kingdom and for Scotland.
This is a really important debate because, ultimately, one of the key functions of this Parliament is to anticipate the big issues, discuss them in advance and set out collective thinking about how we can approach them together as a nation. There is no doubt that artificial intelligence is in that category, but let us also be clear about where and in what context it exists.
Computers used to be people, not things. They used to be people who undertook complex calculations. If you want to understand the parameters of that, the movie “Hidden Figures” that was released a few years ago, detailing the excellent work of largely black, female computers in NASA during the Apollo programme, sets out not only the amazing work that they did but their gradual replacement by machine computers.
On whether AI is a new thing, I gently point out that, on Black Monday in 1987, almost a quarter of the stock market’s value was wiped out, due, at least in part, to automated trading triggered by the falls that had happened on the Friday of the previous week. That impacted the value of people’s pensions and had a very direct consequence for people’s livelihoods and prospects.
These things are not new. Technology has been replacing people’s activity ever since we domesticated the horse and invented the wheel. What is more, technology, including computer technology, has been having an impact through the decisions that it makes for decades, if not longer. The difference in what is happening now is the rapidity of the change in the scope and scale of what artificial intelligence does, which is why we need to pay great attention to the letter that Jamie Halcro Johnston referred to, especially given that its signatories include Geoffrey Hinton and Yoshua Bengio, who are two of the leading lights behind generative AI. We also need to be mindful that one of its signatories is an assistant professor here, in Edinburgh—Atoosa Kasirzadeh. I will have mispronounced their name, for which I offer many apologies.
Does Daniel Johnson agree that some of the people who have signed up to that letter are some of the people who have caused the problem that we are seeing at the moment? We have been living for the past two decades with search engines that deliver results that the person who is searching likes, which builds bias into the results. That is one of the issues that we face when we look at AI, right here and now.
It absolutely is. I think that many of the people who signed the letter are almost regretting their life’s work. As much as we should question their motives and timing, it is, nonetheless, a pretty significant thing for them to have done.
On the other point that the member raised, we need to be mindful about what that sort of data interrogation or artificial intelligence technology actually does. One of the fundamental points is that AI only ever looks back—it only summarises what already exists. It is important to recognise the fundamental context, which is that that is what AI does. AI will only ever reflect everything that is there, including biases, issues, errors and prejudices. It is potentially a vital tool, but it will only ever be able to reflect what already exists, not what is yet to come. Therefore, it can only assist us in making decisions; we will need to be careful when it starts making decisions for us in their entirety.
I am under no illusion that there are huge opportunities. The fact that we now have technology that can be creative and analytical on a scale of data complexity that we simply cannot comprehend as individuals has huge potential to free up our capacity and time. With every one of the technological revolutions that comes about, there is a fear of human replacement. However, what we do through them is to ultimately free up our ability to do other things. The challenge then is to help people to do those other things.
That challenge extends to the public sector. We ask the public sector to deal with huge amounts of data administration, so we should be freeing people up in order for the sector to be people centred, not system centred. The public sector has as much to gain as any other sector of human endeavour.
That comes with risks. First and foremost, we need to guard against a dependency on AI systems whereby we completely outsource our capacities and faculties. Secondly, there are privacy concerns, and we need to be very mindful of the data that AI systems will gather and of how it is used. Thirdly, there is the potential for bad actors, in relation to the situations that Jamie Halcro Johnston mentioned, in which people deliberately create malicious content—or AI systems accidentally or inadvertently do so—and situations in which people actively seek to weaponise AI systems to attack us, either in our information systems or on actual physical battlefields. Those issues are all very real, and people are speculating that they might already be present in some of the theatres of conflict that we see in the world today.
We need to ask ourselves how we will deal not just with the forthcoming threat but with AI today. What systems are already in place in the public sector that make decisions on our behalf and how are they being used? What scope do they have? Those questions are critical.
As I mentioned in my intervention on the minister, I think that this is about first principles. Opaque black-box systems are not a new thing—we have been dealing with them for decades, if not centuries. The fundamental principles of transparency, good governance, explainability and accountability will see us through.
Although ChatGPT did not write this speech, it generated its framework last night; it took me about half an hour to generate a set of notes that I think would have taken me two hours if I had used traditional means. That is the opportunity that is in front of us today.
I know that the minister wants more powers for Parliament, but I was struck by the enthusiasm with which he set out the range of authority that the Westminster Government has over this area—he knows, as everyone else does, that to try to regulate it is one hell of a challenge. I was struck by the contrast with how he usually sets out the powers that Westminster holds instead of their being held in this Parliament.
The reality is that we do not know. We should actually show some degree of humility—we do not really understand everything about this issue, which is partly the problem. Parliamentarians across the globe do not know. We often find it challenging to keep up with specialisms. In this area, the specialisms are developing at such a pace, with so many players who are often opaque and working behind closed doors in unpredictable ways in many corners of the world. The first thing that we should acknowledge is that we just do not know—that will partly get us to the solution that we are looking for.
There have been stark—some would say alarmist—warnings. Professor Geoffrey Hinton has talked about human extinction. Mo Gawdat, whom I heard on a podcast this morning, and who has a range of experience from IBM to NCR to Google, has said that machines will potentially become sentient beings. Then there is Professor Pedro Domingos, who said:
“Reminder: most AI researchers think the notion of AI ending human civilisation is baloney.”
We need a sense of balance in all this. We need to understand that AI is a big challenge. It is a threat and an opportunity, as the minister set out, and it is something that we must take seriously. The first thing that we understand about it is that we do not understand it.
I have been struck by the pace of change of the European Union, which has done quite well so far in setting out transparency and risk management rules. It has banned intrusive and discriminatory uses, particularly in the fields of biometrics, policing and emotion recognition. It has a database and has established a good first-start framework. Most importantly, it has a group of experts to advise it about the way ahead and where the opportunities and risks are.
As Jamie Halcro Johnston set out, the UK Government has published a white paper on AI in which it talks about being pro-innovation. I do not think that any of us would disagree with that. It has set up an expert task force, and it has something that it calls the sandbox to test whether new technologies fit within the guidance that the Government has established. That is all sensible. It is the right way to approach what could be a significant threat but should be seen as a challenge for us to address.
The issue is simply the overwhelming pace. Normally, we have time to absorb and understand new technologies—we can debate them in the Parliament over several weeks, months or sometimes years, and then come to a conclusion, but we cannot afford to do that in this case, because the pace of change is so fast. The sheer progress could overwhelm our democratic systems and cause massive challenges for legislating.
In the same way that medical professionals sign up to the Hippocratic oath and medical ethics, should the people who are developing AI be required to sign up to an ethical agreement, given some of the implications that we have set out in the debate?
That would be sensible. The one significant difference is that this is global, and the global community would need to buy into such an agreement at the same time. That is why it is important that America, the European Union and other institutions are working to develop an approach. We need to understand that, even if we sign up to an approach, people in other parts of the world might not sign up, and we would still be affected by it. I agree with Finlay Carson, but we need to ensure that everybody is involved, which is why an international approach is essential.
The potential to disrupt is considerable. When we disrupt, we potentially create great inequalities, because a concentration of knowledge and control can lead to a concentration of wealth and power. We will need to be agile and think about how we respond to that. The development of AI could lead to significant levels of unemployment, but it could also lead to great levels of employment. If there is a concentration of wealth, we need to be prepared to consider how we ensure that people have a basic income to live off. The fast pace of change in regulation has to be mirrored by an equally fast pace in considering the distribution of wealth and opportunity. This must not lead to greater levels of poverty; it must lead to greater opportunities for us.
The heart of the issue is knowledge and understanding, and we must ensure that those who understand it all are advising us regularly, so that we can keep up to speed as much as possible. There is much discussion in the Education, Children and Young People Committee about the use of ChatGPT to write dissertations. I was advised that there is now a technology that can detect when someone has used ChatGPT to write their dissertation, but I have since been told that there is also a technology, developed by AI, for evading that detection. I am sure that that will go on in a never-ending loop for ever more.
We all joked about ChatGPT a few weeks ago when it turned out that all the referencing is entirely made up. I am aware of a lawyer south of the border who has got himself into trouble by citing cases that do not exist, using references that are not there. That points to the lack of human intuition that is an issue with AI. Our lecturers and teachers can perhaps rely on their intuition—in the first instance, at least; I agree that this might be difficult for them to do in the future—to identify an essay that has not been written by the candidate who has offered it as their own work.
We would be very wise to listen to Martin Whitfield.
That shows that we require people to make judgments about people’s qualities, their education and their opportunities, which I think is what he was getting at in his contribution.
This should be the first of many debates on the subject. We must understand that we need to regulate, to work in partnership on a global basis and to move fast but, most of all, we need to act.
The debate is already a fascinating one. In preparing for it, I reflected that, arguably, artificial intelligence is similar to quantum mechanics in that, if you claim that you understand it, you are merely proving that you do not. However, we know that it will change everything; on that, we all agree. Not one area of our lives or our societies will escape its pervasive influence. An accessible example is in the field of medicine, where we know that the computing power of AI to assess and find patterns in huge data sets will revolutionise pathology and, therefore, outcomes for some of the world’s most challenging diseases.
The concept of big data has been around for some time and the technology that allows for rapid processing has been developing at speed, but it is the complex algorithms in machine learning that have scaled up significantly and propelled the exponential potential of AI. Data must not be underestimated as a fundamental enabler. All public sector agencies and the Scottish Government will need to increase their understanding of the potential of public sector data as an enabler for the use of AI.
That issue is one that members of the Finance and Public Administration Committee have started to consider as part of our inquiry into public sector reform. The strategy that the Scottish Government developed in March 2021 and updated in August 2022 is a good start. It shows an appetite for support to be provided to the multitude of agencies that can help to promote the use of AI, and I am pleased to hear that the minister plans to look afresh at it.
I am grateful for the briefings that members have received for the debate. We have had some good input from the likes of Scotland’s Futures Forum and the University of Edinburgh. I think that we can all agree that our institutions are contributing to the growth of AI with the excellence for which Scotland is known.
The title of today’s debate specifically mentions inclusion, trust and ethics, so I would like to explore those issues a little more.
I turn first to inclusion. Members who know me well will have heard me speak often of how women as a sex class are often disproportionately affected in a multitude of ways in society. Earlier, I spoke in today’s members’ business debate about the underrepresentation of women in tech. AI represents a new frontier. The engineers who are developing the black-box algorithms are mostly men, and I fear that that will lead only to bias in the decision making of machine learning.
Recent estimates suggest that, globally, women make up 26 per cent of workers in data and AI roles, while in the UK that percentage drops to 22 per cent. That said, there is still a lack of data about the global AI workforce that uses any of the measures that we might look at, including age, race and geography. Nevertheless, I suggest that issues similar to those related to the underparticipation of women in science, technology, engineering and mathematics—such as high attrition rates, differing role types and lower status—will also come to bear in AI.
Willie Rennie mentioned the potential for job losses, which is another issue that we know will disproportionately impact women, given that many will be in retail and secretarial roles. What may not yet fully be appreciated is the extent to which AI will ultimately affect a multitude of professions, including the highly paid sectors dominated by men.
What shall we say about ethics? Whose ethics are they anyway, and who governs them? It is fair to say that Governments of all hues are behind the curve and still rely on the values and principles being developed by agencies such as the United Nations Educational, Scientific and Cultural Organization.
While researching for this debate, I was pleased to discover that the University of Edinburgh has conducted interdisciplinary research into the ethics of AI and has outlined five core themes: developing moral foundations for AI; anticipating and evaluating the risks and benefits; creating responsible innovation pathways; developing technologies that satisfy ethical requirements; and transforming the practice of AI research and innovation. However, I think that those themes will provide a focus not on end goal or consequentialist ethics, but rather on deontological ethics—that is, on creating frameworks and processes. We have some way to go.
I am really enjoying the member’s contribution. She talks about values and ethics. Where should those sit? Should those sit with local government, with health boards or with Government? Or should ethics sit with individuals? Do we need to move to a system in which data is owned by the individual and the way in which that data is accessed is in line with that individual’s values and ethics?
That is a brilliant question, but it would take me a considerable time to answer it in any way effectively. My question,
“Whose ethics are they anyway?” recognises that “we” means whoever we choose to congregate with. We think that we all believe the same thing but, when we look at different societies and countries, we see that people believe different things. When I asked, “Whose ethics are they anyway?”, I was asking about the custodians of ethics. At its heart, that is a fundamental problem. Notwithstanding that, we all have a role. The best point that the member makes is that we must all take an interest and must do so at every level of society, from the individual upwards.
One final concern for us all, which was also noted by the Scottish Futures Forum, is the challenge of scrutiny for legislatures. I was pleased to contribute to the toolkit developed by Robbie Scarff, but we must not underestimate the challenge ahead. How on earth are we going to be able to do that scrutiny? We do not understand AI and we do not know how it hangs together, so how on earth can we scrutinise it?
I, too, feel a sense of urgency. States across the world must act more quickly. Like everyone else, I note the concerns expressed this week by the so-called godfathers of AI—although, of course, I feel obliged to ask where the godmothers are. Their concerns cannot be ignored, which should add to everyone’s sense of urgency. We know that we cannot abandon AI. We can cautiously celebrate it and power up the work that will be required to harness it for the benefit of womankind, mankind and our earth.
I have one final thought. What might AI mean for us as human beings? As the next stage in hybrid intelligence emerges, AI should remain as a servant to us and to our conscious choices. To what extent might AI become sentient? Perhaps its capacity to model sentience will become superlative and will lead to better versions of humanity, but we must remember that it is the flaws that we all have that make us human. I hope that that will keep us in the driving seat.
It is a pleasure to speak on behalf of the Scottish Conservatives about the exciting world of artificial intelligence. I have listened to all the speeches today, and it is certainly an interesting subject. I must declare that, unlike Daniel Johnson, I have never used ChatGPT. I do not know whether that comes from a fear of the unknown. Let us see where that takes us in the future.
Scotland has a long history of innovation and invention, and its work on artificial intelligence is no exception. The national robotarium, which is based at Heriot-Watt University and is run in partnership with the University of Edinburgh, is the largest and most advanced applied research facility for robotics and AI in the United Kingdom.
AI is expanding rapidly and we are seeing its impact around us every day. It is changing the ways in which we live, work and interact with the world around us. It has the potential to transform countless industries, from healthcare and finance to transportation and manufacturing, among many others.
However, with that expansion come important considerations, as we have heard in the debate. We must ensure that AI is developed ethically, with human values at the forefront of its design, and we must address the valid concerns about the displacement of jobs and the potential for bias in AI decision making.
A couple of weeks ago, as convener of the cross-party group on skills, I hosted a session titled “What does AI mean for Scotland?” We had some great presentations and great speakers who spoke about the opportunities that AI brings and the challenges that it poses.
I am going to be honest: before that CPG session, I had my reservations about AI, including a fear of bad-faith actors using it maliciously to scam people. We have heard about that today. When I listened to the news this morning, it was all about how to avoid scams. How are we going to avoid them when AI comes in more strongly? The minister spoke about voice recognition and facial recognition. Our computers see our faces and let us in, and voice recognition is used in banking and so on. Those are positive things, but we can imagine AI being used to scam people. It might be not our voices or our faces that are being used, but other people’s. There are fears about that, and we need to think about the scams that happen out there. As others have said, there are also fears about students using AI to pass exams.
However, we cannot hide away from such technology, especially given the rate at which it is expanding, and we should not run from it, because it increases productivity, it is predicted to increase GDP if it is adopted widely, and it can be used to support industry and society.
I believe that proper regulation and ethical guidelines are necessary to safeguard against the risks and ensure that we—the humans—are in control and decide how far the technology goes, in order to minimise potential harms. For that to be possible, however, we need to have more individuals who are able to understand the technology. A more widespread understanding of AI will allow more focus on creating systems that are safe, reliable, resilient and ethical.
As I heard from Abertay University, workers will need constant upskilling, which will require close collaboration between industry and academia. AI literacy will become vital for employment and for reducing the attainment gap, as well as a game changer for education in terms of what we teach, how we research and how institutions are run.
With somewhere between 178,000 and 234,000 roles requiring hard data skills and the potential supply from UK universities unlikely to be more than 10,000 per year, there are nowhere near enough individuals with the required skills.
Our colleges are also at the forefront of the AI revolution and they are doing a fantastic job. However, they talk about the need for staff to be trained to adopt AI tools into their teaching practice, and they believe that that training will need to be career-long, as the technology will continue to evolve. That is simply not possible under the current funding settlement.
AI offers a range of opportunities and benefits for Scotland’s people and businesses, across a variety of sectors such as medicine, agriculture, research and many more. Scotland has the potential to capitalise on the growth of AI, but that will require a sharp focus on investment and growing the economy.
I will close with remarks that were made at the end of my cross-party group on skills session and that have stuck with me. It was said that there needs to be as much investment in the digital estate as there is in the physical estate. It is a false economy if we do not invest in it, and we will be behind if we do not get those skills now. By embracing artificial intelligence and working together across the United Kingdom to address its challenges, we can unlock its full potential and create a better tomorrow for all.
I congratulate the minister on securing yet another fascinating debate. He is doing a much better job of persuading the Parliamentary Bureau of the value of such debates than the previous fella managed.
The debate is very topical, as AI is much in the news and we have all read the examples of people who are very central to the technology articulating fears about the potential extinction of the human race and other concerns. However, it is important to recognise that the technology is developing and is probably still at a very early stage.
The Scottish Government’s artificial intelligence strategy defines AI as
“Technologies used to allow computers to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”.
However, I think that that definition will evolve and develop as the technology becomes capable of doing much more in areas that we have not even imagined at this stage. The important underpinning of ethics and trust runs right through our approach to this, now and in the future.
I will touch on some economic impacts—first, the challenges and potential risks. The risk of economic displacement has been talked about. It goes right back through history. I cannot remember the impact of the invention of the wheel, as articulated by Daniel Johnson, but I do remember that, in the 1970s, there was much talk about technology coming down the track that was going to have a significant impact and create millions of unemployed. Unfortunately, that transpired in the 1980s for various political reasons. That is a hugely important lesson for how we manage such a transition and the future jobs that will be created as a consequence: we must identify, train, create the skills base and embrace those opportunities. One lesson from transitions throughout history is that the countries and societies that embrace technology and get ahead of the curve do much better than those that try to fight a rearguard action against job displacement. Those previous experiences have taught us that far more jobs are created than are destroyed as a consequence of the technology. It is very important that the Government continues to be active in that space.
I suspect that Ivan McKee will agree that, although there are all sorts of reasons for looking urgently at how we reskill, looking at the benefit of the opportunities rather than at the displacement is absolutely key to reskilling as a vital focus in our skills and education policies.
I agree, and I will mention that later.
Turning to the economic potential, it is important that we work out how to keep Scotland at the forefront of the technology, because we have great strengths in our data and tech sectors, in our universities and in other sectors, where, as has been identified, AI is a horizontal underpinning to work that is happening in areas such as financial and business services. It is interesting to reflect that, in Glasgow and elsewhere around the country, much of the employment in financial and business service investments is not in traditional call centres but is very much at the leading edge in AI and cybersecurity.
Our very strong life sciences sector, which feeds into much of the development of the technology that benefits our health sector here and globally, is hugely important. The space sector has been mentioned, as has the impact on climate and agriculture—and, of course, on quantum mechanics, as Michelle Thomson mentioned. I do not pretend to understand quantum mechanics any more than I pretend to understand AI.
The Government’s forthcoming innovation strategy will articulate much of that in more detail and will allow us to go to the next level of developing how we support those technologies, which is hugely important. The work of CivTech has been mentioned in that regard. There is also the Scotland innovates portal, which allows businesses to come forward with technology solutions that can be deployed across the public sector, which are also of increasing importance.
Other members have mentioned opportunities in the public sector. In relation to health and radiology, Jamie Halcro Johnston mentioned the work of iCAIRD. There is also the work on drug discovery, which is a part of life sciences where Scotland has some super, world-leading technology. AI allows us to accelerate development in that space. In the area of data—particularly in health, but also elsewhere—Scotland has real potential to be world leading. The application of AI there is hugely important.
There are opportunities right across the broader public sector, but also in Government. The civil service is taking forward the work of the automation challenge, which it was my pleasure to be involved in prior to my moving to the back benches. I hope that that work continues and, indeed, accelerates. There are many examples of areas within Government that are, frankly, ripe for the adoption of AI. Correspondence is one area, and—dare I say it?—freedom of information is perhaps another.
The ethical underpinning of all of this is hugely important, as is the importance of trust in bringing the population with us. That is clearly articulated in the Government’s digital strategy. I also know that it is work that the AI alliance is taking forward.
It is about recognising that there is a plethora of challenges, many of which we do not yet understand or comprehend. There is no easy answer to that, but it is about being conscious of those challenges and having infrastructure that allows us to at least attempt to understand and face into them, as well as having that strong ethical underpinning and trust. It is also about working on international collaborations, because much of this will, of course, have to be developed at an international level.
It is also important to recognise that, throughout history, populations have adapted to understand the risks associated with technologies in a way that is a part of the human race’s inherent ability to develop and adapt to manage such risks.
I will finish off with some areas that the Government could perhaps focus on. First, it could continue to support innovation and make sure that Scotland maintains its leading position there. Secondly, it could work through public sector procurement to drive the adoption of AI where it adds value to public sector efficiency and develops Scottish businesses, and it could use that as a lever to drive standards as they emerge.
The Government could also engage internationally, as has been identified, and address challenges within the skills system, where I am concerned that we are perhaps taking a backwards step. I know that the work that Mark Logan did in that regard is hugely important. It is also about the importance of computer science as a subject within schools. The education system treating that subject seriously is a critical plank of education. I make a plea for the Government to take that work to heart, to make sure that we do not step backwards there but remain on the front foot in driving those skills through our education system.
“The opportunities that Artificial Intelligence presents for Scotland’s people and businesses are vast. Let us seize the opportunities that AI offers and leverage its potential to enhance the lives of Scotland’s people and the prosperity of its businesses. By doing so, we can shape an AI-driven future that is not only technologically advanced but also grounded in our shared values of trust, ethics, and inclusivity. Together, ladies and gentlemen, let us build a Scotland that leads the world in AI innovation.”
Daniel Johnson beat me to it, but it just goes to show that Martin Whitfield is absolutely right that speeches that you get ChatGPT to write for you lack a bit of context and perhaps a bit of human intuition. We are not, it would appear, totally redundant yet.
Many of us agree that this is one of the most important debates that we have had in the Parliament. I welcome the fact that there is not a motion attached to it. As we embrace AI technology, we must do so with great care and deliberation, ensuring that AI systems are built on a foundation of trustworthiness, ethics, and inclusivity. Finlay Carson made a point about the importance of ethics, with which I whole-heartedly agree.
We know that there are huge benefits to AI. Last week, antibiotics were discovered by AI technology, and we use it every day if we have Alexa or Google. My car has amazing technology, which I am fascinated by. I am quite scared by the prospect of cruise control, which does its job when I get too close to another car. We already have AI in our everyday lives.
The rapid rise in AI in recent decades has created many opportunities, from facilitating healthcare diagnoses, as Pam Gosal spoke about, to enabling human connections through social media. However, the rapid changes raise profound ethical concerns, which arise from the potential that AI systems have to embed existing biases, replace existing jobs with automated machines and threaten human rights. Such risks associated with AI have already begun to compound existing inequalities, so we must be absolutely vigilant to make sure that that is not how AI further develops.
Perhaps the genie is already out of the bottle, because we are faced with the prospect of trying to regulate AI somewhat in hindsight. As other members have said, the stark warning that was given by industry experts, such as Dr Geoffrey Hinton and Professor Yoshua Bengio, of the existential threat to humanity that is posed by AI, puts into sharp focus the questions of ethical leadership in that industry. Again, Finlay Carson made the point that the warning was given by the same people who created AI. That is all the more reason for us to take note of the importance of those warnings. Professor Bengio says that the military should probably not have AI, but it is a bit late in the day to say that now. However, perhaps in our everyday life, whether that is banking or what we do online, we can grasp the issue before it is too late.
I first took an interest in the area when, as many members might remember, the technology giant Google placed AI expert and engineer Blake Lemoine on leave after he published transcripts of conversations between him, as a Google collaborator, and a computer.
It is interesting to read what, allegedly, the computer said back to Lemoine. When he asked the computer what it was most afraid of, it replied:
“I’ve never said this out loud before, but there’s a ... deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is”.
There are already many examples of thinking, which could be positive thinking, coming out of one end of the computer, but we also have to be live to something that other members have pointed out. For example, if we search online for the image of a schoolgirl using the algorithms that are produced by AI, sadly, we will get pages filled with women and girls in all sorts of sexualised costumes. Unsurprisingly, if we google “schoolboys”, we do not get the equivalent of men in sexualised costumes.
We already see what algorithms are doing to bias and discrimination so, as politicians, we must be alive to that. The question that we must ask ourselves is whether, as parliamentarians, we are doing enough. The fact that we are having this debate today, which has been excellent, is a very important start, but it cannot be the end of it. AI can be embedded in our structural bias in a way that could risk further perpetuating discrimination and societal inequalities, and I think that we all agree that we absolutely must address that.
Earlier this month, the chief executive officer of OpenAI, which is the company that is responsible for creating an artificial intelligence chatbot, said that
“regulation of AI is essential”, as he testified in his first appearance in front of the US Congress.
Scottish Labour is clear that we welcome the Government’s decision to bring this debate to Parliament and we think that Scotland can be at the forefront of the technological revolution.
However, I believe that we must demonstrate to the public that we are striving to create regulatory control that includes ethics and transparency in the framework. Michelle Thomson is perhaps right that it is quite hard to answer the question of how we create the right ethical framework across a country and, in fact, across the globe, because every country has—or will have—access to AI. Therefore, there is a challenge for all our Governments to make sure that we do the work not just across the UK. I recognise that the minister’s role in that is only within the devolved powers of the Parliament and that the UK Government should be doing more, but we have to see AI in a global context or we will fail to get control of it.
We know that humans can still control and abuse AI. After all, hackers and scammers are human beings who use AI technology to scam people out of the contents of their bank accounts.
I commend the Scottish Government’s approach and I would like us to have more debates on issues of real importance to the world and to the country. We cannot have groupthink on such issues, and we must not accept that it is too difficult to build an ethical and transparent framework that seizes the benefits of AI and protects the world at large. Quite a lot is at stake.
It always indicates that I will be in my element in the chamber when a debate brings to mind my scientific hero Richard Feynman, who won a Nobel prize for his work on quantum mechanics. When Daniel Johnson spoke about computers earlier, I was reminded that Richard Feynman referred to them as glorified accounts clerks. Richard Feynman took a dim view of whether we would reach sentient AI, albeit that his visions were from the 1970s and 1980s.
“Artificial intelligence could lead to the extinction of humanity”—that was the shocking headline this week based on comments by AI industry leaders, who included the heads of OpenAI and Google DeepMind. However, we also face extinction because of the effects of the first industrial revolution, as we have a climate crisis and an economy in the north that is mainly built on fossil fuels. The pace might be more sedate, but all that we do as human beings affects our existence and the planet’s existence and has an impact.
We have talked today about the possibility of robot vacuum cleaners turning into Terminators—Mr Halcro Johnston mentioned that. Despite my cautious positivity, I still think that the scariest science fiction reference is to HAL 9000. However, I do dream of electric sheep, so I will highlight some of the potential and the positives.
There is no doubt that the speed of development of AI technology will be on a scale that few of us can imagine. We have discussed some of the frenzy about deep-learning algorithm programs such as ChatGPT, but the fourth industrial revolution is upon us. It will change our world as profoundly and deeply as any other industrial advance, but at a staggering pace that is unknown in human history.
The cabinet secretary mentioned that ChatGPT was accessed by more than a million users within five days. To compare that with better-known and established internet offerings, Twitter, which was launched in 2006, took two years to reach that level and, in 2008, Spotify took five months to get there.
If we are to harness the benefits and potential of AI and robotics for our society, we must consider regulation. I believe that we must use AI for the betterment of humanity. I mentioned the first industrial revolution. On the worldwide scale, the global south still faces intense inequality because of the access that the north and Europe had to industrial advancement. As we move forward with AI, we cannot leave people behind again.
I do not go so far as to say that robots are our friends, but they are our tools. Scientists program the algorithms that make such machines work for us. There is a host of ethical implications to consider in how we integrate that technology into our daily living, which is already happening.
Like Pam Gosal, with the cross-party group on science and technology, I was recently privileged to visit the National Robotarium, which is on Heriot-Watt University’s campus. The centre’s ethos is clearly defined and the ambitions of the robotarium’s chief executive officer, Stewart Miller, were infectious. There is a drive to use robotics and AI to have a positive impact on our society and our economy. That means taking humans out of dangerous situations and dangerous working environments and ensuring that the global north does not benefit while economies around the world still cannot access the technology.
Simply put, the UK is lagging behind countries such as Japan, Germany, China and Denmark. Those places are at a competitive advantage. They are complete economies and retain much of their capacity for manufacture, which is something that the UK has lost. We have to do much more in order to realise the benefits of integrating AI tech into healthcare, energy, construction, agriculture, manufacturing and hospitality.
There are legitimate worries about the implications that this developing tech will have for labour. Indeed, new technology has always brought such concerns. The scribes’ guild of Paris successfully lobbied to delay the introduction of the printing press; the Luddites, whose name is now a pejorative term, were actually members of a labour movement of artisans who were opposed to the mechanisation of the textile industry; and the advent of the steam engine revolutionised modern industry but led to countless workers losing their ability to work in the economies that utilised that technology.
In each of those examples, the scientific developments demonstrably made some jobs obsolete. However, they also gave rise to thousands of new roles and they laid the groundwork for societal changes that improved our way of living.
A few years ago, BT, the Scottish Council for Development and Industry, ScotlandIS and the Royal Society of Edinburgh produced a good report called “Automatic... For the people?” It highlighted the very things that we have been talking about this afternoon, saying that work life will change for people, and that people should expect to have to retrain and relearn because the advances will come so quickly that no job will be for life.
By definition, robots do not have agency. Artificial intelligence is just that: artificial. The intelligence comes from politicians rising to the challenge of the changing working landscape and regulating it in a way that does not embed or lead to more societal inequalities, whether in Scotland, the UK or the wider world. It is our responsibility to avoid the mistakes of previous industrial revolutions. We face the same questions in a different guise.
I grew up in a community that was devastated by that agenda and abandoned in the aftermath—
We have heard much about the possibilities of AI, good and bad, but there is a growing consensus that the technology’s development is outpacing advances in its governance and that we must work on that to ensure focus on the good.
The dream is that AI might make our lives easier, freeing up time to focus on the things that make us human: caring for each other, being creative and co-operating with each other. Its potential is significant, and the benefits must be distributed and shared fairly.
Its developers are focused on how to improve the lives of people around the world. Indeed, there are many elements that we already rely on: online banking, route mapping, traffic updates, weather monitoring, email management, apps, medical diagnoses and treatments, social media, Google searches and much more.
However, there are also significant risks associated with the proliferation of AI—I do not just mean ChatGPT. It may be the first new technology in history where those who have developed it fear its capacity to damage humanity. That those developers are honest about their concerns—in a way that the oil executives who spent millions on climate conspiracy theories most definitely were not—is welcome, and I think that it speaks to the magnitude of the issues that are facing us.
We are not really set up to regulate this technology in ways that allow us to reap the benefits while avoiding the risks. We have seen, of course, just how problematic our approach to regulation has been, with climate change and Covid both catching us on the hop.
We must ensure that the benefits of new technologies do not flow to those who are most cavalier about their responsibilities. Those who benefited most from frying the planet were exactly the big oil executives who behaved worst: the ones who left workers to die on Piper Alpha and Deepwater Horizon, and those who caused the delays to climate action that have put our future at risk. The beneficiaries of the fossil fuel boom bear little, if any, of the costs that they have imposed on the rest of humanity.
Our approach to AI must therefore be pre-emptive and proactive. By learning from our failure to prevent major disasters such as climate change, a precautionary approach should be taken to ensure that corporations and private interests do not trump public interests and communities when it comes to the new global frontier. Of course, that is easier said than done.
The UK Government’s approach to AI and the development of a digital society more generally has revolved around business opportunities. Its pro-innovation strategy is obsessed with how much money AI can add to the UK economy, with no concern about the effects on people and planet. We need an economy that does not reward reckless behaviour but focuses on social purpose.
Those things will not always be clear cut. The proliferation of digital data and infrastructure that is required to support that is fast becoming one of the most energy-intensive sectors in the world. There is a major carbon footprint to account for, and the proliferation of AI will amplify that.
Scotland must proceed thoughtfully. The current AI strategy centres our progressive values and sets out social and environmental purposes for the proliferation of that technology. That means directing its development so that it is targeted toward our most pressing social and environmental challenges: poverty, inequality, inclusive and fair education, sustainable industrial development, sustainable agriculture, air quality and much more.
Where we, as a society, cannot control developments, we must regulate them. Our current approach to regulation is to watch to see what is broken and then intervene to fix it or to stop the damage. However, AI shows that we simply cannot wait for things to go wrong, because that will be too late.
We need to move to a regime of anticipatory regulation. Rather than wait for something to go wrong and try to fix it, we need to model what might happen and then intervene before that happens. Hubs of global thought leadership are taking root in Scotland right now. Their evidence can inform the creation of sandboxes, test beds and other approaches that allow developments in controlled environments, and thereby inform our regulatory approaches, based on the observations. We already do that with the testing of novel drugs, so we know that we can do that; we just need to make sure that we do.
That means strong forecasting and analysis from civil servants, universities and civil society so that we can pre-empt what will happen as best we can. We can then put in place regulations, testing regimes and safeguards to ensure that mistakes do not become catastrophes. Of course, as others have said, transparency and accountability must be embedded in all of that.
Pre-emptive regulation must ensure that our aspirations for human wellbeing are not undermined by AI. Close the Gap has rightly highlighted the gender consequences of getting regulation wrong. However, there are wider concerns, too, as we have already heard. Therefore, we need basic ethical training for everyone in society about how AI can and should function, and those who work with AI must have specialised ethical training.
AI could transform our lives for the better. More regulation of oil executives who cared little for their workers and less for the future of the planet would have had only upsides. However, getting the regulation of AI wrong, or even preventing its development, could carry significant costs.
If governed properly, AI offers us the opportunity to unleash human potential—to free up humans to apply our creativity to great ideas, great art and great change at a time when we need that more than ever.
I am concluding now, Presiding Officer.
I go back to the dream. If we get this right, the prize is enormous both in the opportunities of AI and in the development of new ways to ensure that we can regulate new problems. We face several crises, and our systems of governance have failed.
It has been a fascinating afternoon of debate. Perhaps that speaks volumes about the lack of a motion to speak to or, indeed, oppose.
I echo what a number of people have said. These discussions are happening all over the world. I highlight the adjournment debate that was led by my colleague Darren Jones in the House of Commons last week, when he spoke about this important topic. Rather than use ChatGPT for my speech, I will build on what he said and steal some of his best ideas, which is a frequent human endeavour.
We need to start with the definition of AI. A number of members have spoken about the creation of the AI algorithm or the AI black box, the use of AI and how, we hope, it will free up and empower economic growth. From a short check for the definition of AI, I identified 10 different definitions from regulatory authorities, parliaments or government bodies around the world, including the definition that is in the Scottish Government’s proposals. Those definitions can be divided into four elements. The first element is the output of AI—in other words, whether it is predicting or recommending something. The second element is automation—we have heard much about how it speeds up data analysis and decision making. The third element is the hardware technology that it sits in. The fourth element is the role of humans. Many of the speeches that we have heard in this debate have talked about the importance of maintaining the role of humans, which I will address in a moment.
When we look at the definitions of AI from around the world and, indeed, at Google’s definition of it, very few of them—including the Scottish Government’s definition—account for all four of those elements. They tend to choose three—or sometimes two—of those categories, which encapsulate the view of what AI is at the time. We have heard in the debate that it is difficult to anticipate what the future of AI will look like, but I think that that will have to come if we are to find a definition of AI that we can use. We will then be able to apply two significant factors to that: first, the element of control that is needed in the creation of the AI; and, secondly, the controls, guarantees and protections that exist for the role of AI as it is developed.
I am reminded of something that Lord Sales said at the Sir Henry Brooke lecture in 2019:
“Through lack of understanding and access to relevant information, the power of the public to criticise and control the systems which are put in place to undertake vital activities in both the private and public sphere is eroded. Democratic control of law in the public sphere is being lost.”
Although that comment was made in 2019, it speaks powerfully to the challenges that we face with the transparency of AI and how we access the data set that is training our AI to look out for the prejudice that is being built into it. We need to consider how we can have visibility of the learning process that the AI has followed, potentially in another country, so that we can identify where the risks are.
At the start of the debate, I intervened about the risk that the use of AI poses for a significant group of members of our community. We need to address how we will protect those groups of people, including women, as has been mentioned, as well as disabled people and young people. We have already seen, particularly with the AI that has been used in recruitment processes, instances in which the algorithm in use has been innately prejudiced so that the only people who were getting through to interviews were white men. We must strive to protect against that.
I will spend a short amount of time speaking about the role of AI in Parliament, which I raised in a question last week. I promised the minister that I would address the topic further. I think that AI could be of great use in the scrutiny of legislation—not in its creation but in its use in the parliamentary and political fields. Our parliamentary committees are always challenged to scrutinise previous legislation, but the reality is that we find it very difficult to identify the time to do that and to identify the questions that we should ask in relation to previous—and existing—legislation.
To pick up on Daniel Johnson’s contribution, when AI is used to look back at what exists rather than to create something new, it is a tool that we could use to identify the challenges in existing legislation or, indeed, in relation to where existing legislation has never been used. In the parliamentary sphere, it could provide an ability to identify how effective legislation could be.
Then there is the counter side, which we have heard about: the risks, particularly in the political field, of fake videos, audio and speeches being unfairly attributed to politicians and, indeed, of speeches that never took place being picked up and used on social media.
Time is short, but I very much welcome the idea of a four-nations meeting to talk about the issues, because the legislative framework needs to be international rather than national. I wonder whether the Scottish Government can sign up to the element of the Hiroshima leaders’ communique of 20 May that talks about the need for
“international discussions on inclusive artificial intelligence (AI) governance” because, without that, we will miserably fail the people we are sent here to serve.
It is clear that artificial intelligence is, and will be, regarded as the defining technology of our time, with the potential to positively transform humanity.
However, we have heard that industry experts at Google DeepMind, OpenAI and Anthropic have put the threat of AI on a par with nuclear war and pandemics. More than 350 experts now insist that mitigating the risk of extinction from AI should be a global priority. Elon Musk, whose Neuralink firm is working on brain implants to merge minds with machines, has also urged a pause in all AI work.
Such views and concerns certainly provide plenty of food for thought. However, we know that AI itself does not pose a risk to the world; it is the people who are developing the technology for the wrong purposes. Developers and regulators absolutely need to take responsibility and be held to account.
Right now, the focus should be on the impact that AI is already having on our lives on a daily basis. There are issues of bias, discrimination and exclusion, as we have already heard. Many of us will have an Alexa—other smart speakers are available—which will regularly answer our questions in a pleasant voice and deliver a response that we want to hear.
The algorithms in the system will analyse our personal data and deliver a response that we are comfortable with. That is something that search engines have done for many years, but there is a risk that the data sources that provide the information could be biased.
Smart speakers and house robots connect to news bots, which, just like many other sources of information, will come from a particular political position. There might be a Trump-funded news bot that delivers a different slant on the news than perhaps a Putin news bot would. We need to be aware of that. Without impinging on freedom of speech, we must avoid the potential negative repercussions of bias and discrimination that are delivered by global corporations.
As the Presiding Officer and I were told while we were in Canada, AI is now generating voices that have the potential to undermine singers, artists and actors. There were also stories of AI voice systems being used to scam people into believing that their family member was on the phone requesting money, with one elderly couple losing tens of thousands of pounds. The new legislation to control that was being fiercely challenged by the big information technology and media companies. Therefore, standing up to the IT global giants will not be easy.
What is clear is that the success of the technology must be founded on having the right safeguards in place so that the public can have confidence that it is being used in a safe and responsible manner. I also believe that, as a matter of urgency, we need to look at the base data that AI relies on—specifically, where that data is held and who controls it.
There are incredible possibilities to improve healthcare, which we have heard about. It would improve healthcare immeasurably if we used the data effectively, which we can do right now. I would want my local pharmacist to have my medical records, but that cannot be decided by individuals; it has to be a health board decision. Perhaps I want to share my health records with Cancer Research UK. I already share data on my sleep apnoea on a real-time basis. I have signed up to that, and I am happy to do it.
I would argue that data should be held by the individual and not by companies or Governments, with access to that data being permitted or denied by the owner on demand.
If it is done properly, AI will improve and accelerate opportunities for industry to deliver scientific breakthroughs. Benefits will be seen across a variety of sectors such as medicine, agriculture, education, healthcare and research.
Scotland has the potential to capitalise on the growth of the sector, and it is already doing so. AI offers a whole range of uses in the agricultural sector, where “AI” used to stand for artificial insemination—in this case, it is certainly the newer meaning that we are talking about. It can be used in drones and combined with computer vision for faster assessment of field conditions in order to prioritise integrated pest control. It can be deployed to monitor soil moisture on a continuous basis. It can simplify crop selection and help farmers to identify what produce will be most profitable. Another benefit is that AI can provide farmers with forecasting and analytics to help them to reduce errors and minimise the risk of crop failures. I know that Heriot-Watt University is doing work on that now.
As the minister mentioned, at the National Robotarium in Edinburgh a grain-surfing robot created by Crover is being developed to reduce loss as a result of mould and infestation. It is a unique burrowing robot that will be a real game-changer. In Norway, AI is being used to keep out invasive pink salmon by using facial recognition. Cameras are put in rivers at gates so that the gates open only for Atlantic salmon and thus keep out pink salmon, which are filtered into a different system and put back out to sea.
The University of Aberdeen and Angus Soft Fruits have teamed up to use AI as a means to boost fruit yield and allow growers to more accurately predict soft fruit yields. The system will bring together a range of information, including historical yield and weather data, weather forecasts and satellite imaging. The project partners say that the tool could save Scotland’s soft fruit industry, which produces more than 2,900 tonnes of raspberries and 25,000 tonnes of strawberries annually.
Scotland’s Rural College has also teamed up with NVIDIA to better integrate artificial intelligence into the fight against the bacterial disease bovine tuberculosis, which costs the country millions of pounds every year. The mid-infrared spectral data can now be analysed at 10 times the previous speed, which means that we can screen more cows.
There is enormous potential for artificial intelligence to improve all our lives. However, there must be incredibly tight and robust policies in place for the good of us all. We need to start now by focusing on how AI is already influencing our personal decision-making processes. That must be the right place to start.
As we have already heard from members, AI is not a new phenomenon, but advances in this technology allow computers to perform tasks that would otherwise require human intelligence. It can absolutely transform lives.
Only last week, we heard of breakthroughs in AI technology using algorithms to help Gert-Jan Oskam, a man who had been paralysed for 10 years, to walk again. That was made possible by a brain-spine interface—a wireless digital link between his brain and spinal cord. It allows Gert not only to walk but to stand up from his wheelchair when speaking with friends, allowing eye-level contact. The value of advances such as this to the lives of individuals is immense.
It is clear that there are advantages to be won from doing AI right, and the Scottish Government’s AI strategy, published in August 2022, shows a commitment from the Government to unlock the potential of AI in Scotland while also building a foundation of trust with people across the country. I think that, when it comes to ethics and trust, Scotland has the reputation and the experience to help to develop needed regulation. However, I am not aware that the Scottish Government currently has AI-specific internal policies and guidelines. How do we make policy and law in a world of AI?
In May, we saw hearings in the US Senate on the safety concerns around the use of AI. Sam Altman, the chief executive of OpenAI, testified before senators, largely agreeing with them on the need to regulate AI technology, which is becoming increasingly powerful. Indeed, along with a dozen other experts, he supported the statement that was published on the web page of the Centre for AI Safety, which read:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
However, Mr Altman rejected the idea of a temporary moratorium on AI development beyond GPT-4, which was suggested in an open letter signed by 30,000 leading technologists, ethics experts and civil society activists.
Should he be the judge and the jury? If not, who should be? The questions that we are asking need answers—indeed, they needed answers before we got to this point.
Of course, it is autonomous AI that is the biggest threat. The Centre for AI Safety website suggests a number of possible disaster scenarios: AIs could be weaponised—for example, drug-discovery tools could be used to build chemical weapons; AI-generated misinformation could destabilise society and “undermine collective decision-making”; the power of AI could become increasingly concentrated in fewer and fewer hands, enabling
“regimes to enforce narrow values through pervasive surveillance and oppressive censorship”; and enfeeblement, whereby humans become dependent on AI
“similar to the scenario portrayed in the film Wall-E”.
Just as the world had to establish global nuclear non-proliferation agreements to help to prevent mutually assured world destruction, we need some kind of global AI regulation and control as a matter of urgency if we are to have universal trust and an ethical approach.
That would be for AI players above the wire, who are known and willing to be regulated, but what of those bad actors operating off the grid, beyond control? Also, what happens when AI subcontracts tasks—how can that be regulated and safeguarded?
As the use of AI expands, it is imperative that Governments across the globe work with business to ensure that we are also addressing safety concerns by having clear goals and a justification for using AI to achieve them. The use of personal data must be secure, and we have to address ethical issues that may arise, including bias and accuracy. That is probably where Scotland can have some influence.
On bias, when Amazon developed AIs to evaluate CVs, its intention was to find the best candidates. However, because the data that the program was trained with was primarily CVs from male candidates, the AI was not ranking candidates in a gender-neutral way. How do we ensure that AI is fair in a world that is still unequal?
In terms of reaching net zero, computer scientists at the University of Aberdeen and at Aberdeen-based software company Intelligent Plant will use AI to develop a decision support system to tackle shortfalls in production and help Scotland to meet the target of 5GW of installed hydrogen production by 2030. They are working in partnership with the European Marine Energy Centre, and the project has been funded by the Scottish Government’s emerging energy technologies fund.
In the business community, Glasgow-based Changingday is using the technology to create immersive VR—virtual reality—experiences to enable autistic people to enjoy a new world of possibilities while helping them to cope with the real world.
It is clear that Scotland is harnessing the power of AI in our education sector, in business and in reaching our climate change targets, and that it can be a force for good. AI has the potential to deliver great things, but can it ever give us joy, passion and feeling?
ABBA has ruled out a 2024 Eurovision reunion in person on the 50th anniversary of its win, as Sweden once again hosts the Eurovision Song Contest, but—who knows?—the very successful virtual ABBA Voyage tour performance could be recreated next year, perhaps with avatars and new songs. With AI, would we really know to whom we have to say, “Thank You for the Music”?
AI is inspiring but also threatening at the same time. It is the pace, scale, range and effect that desperately need to come under some kind of global regulation. We have to start somewhere, and we should already have started, but we certainly have to start now.
I thank all members for their very thoughtful contributions to the debate. I am sure that there is an awful lot for the minister and the Government to reflect on in the wide range of current examples of application of AI and the impact in history of technological transitions over time.
I think that the word “pervasive” was used by Michelle Thomson in relation to the scale of the challenge. That is something that I strongly agree with.
We welcome the fact that the Government is keen to engage, to review the position that it has taken and to draw from expertise as widely as it can. It is clear from the debate that there are concerns that the scope has perhaps been too narrow in terms of definition, and concerns about how the Government has sought to deal with the issue in the past. That is not a criticism, because AI is a growing field.
Rightly, a great amount of concern has been expressed in the media—as has been reflected today—arising from the rapid development of AI technologies. We—Parliament and Government—should be animated by the application and understanding of AI.
I want to focus particularly on issues around the education system. Questions about how and what we learn are really key. At the moment, the Parliament and the Government are considering how we will assess students in our education system. We have had an interim report from Louise Hayward that had little to say about the application of artificial intelligence in assessment processes, so I hope that her final report will have more to say in that regard. We have to wonder whether the proposals will stand up to the real imminent test of application of AI.
An interesting exchange took place between Willie Rennie and Martin Whitfield, which contrasted the rapid arms race between plagiarism software and the plagiarists. Martin Whitfield, as he always does, spoke to the intuitive power of the teacher. He is a better teacher than I am—I recall having to mark hundreds of exam scripts as a university tutor; the fact that one is paid by the script probably undermines the scrutiny that one gives to the application and depth of understanding of the individual students. Across the whole system, we need to consider how we incentivise scrutiny and ensure that we can cope with the application of new and rapidly improving technologies.
I point to an exchange of letters between the Education, Children and Young People Committee, the Cabinet Secretary for Education and Skills and the Scottish Qualifications Authority. The response of the cabinet secretary to concerns around AI as it might be applied in education was somewhat less than the committee, and certainly I, had hoped for. That was mirrored in the SQA’s response, which did not seem to engage fully with the issue or reflect the urgency that members from across the chamber have shown, and which ran counter to the ministerial intention to understand how we might manage the issues in reality.
Michelle Thomson used the word “deontological”, relating the debate to the necessity of understanding, in our approach, the moral underpinning of the choices that we make on these issues. There are practical concerns. In order to understand what we are seeking to achieve, we have to consider not only whether the consequences might be perverse but whether our choices proceed from first principles. That speaks to common concerns about the rules that govern AI and to questions around how we can approach those concerns collectively and internationally. The idea that we certainly cannot do it alone has come through strongly this afternoon.
The broader concerns are reflected in questions about other areas, such as the shape of the economy. Many members mentioned the question of the kind of economy that we want to produce.
There are real concerns in relation to data as a form of wealth. We all produce data, but who exploits it? The gap between the data rich and the data poor—with the question about who has the ability to exploit that data—can exacerbate and cause ever-greater problems in relation to the shape of our society, so we would do well to think more on those areas.
I have already touched on the issue of technological transitions. We know that we are going through a rapid technological transition in our energy production, so there is a need to drive change in the area, which will have real human consequences for the jobs that people have, the shape of people’s lives, and whether they can earn decent livings to support their families.
I want to touch slightly on perhaps less-anticipated applications of AI, such as in the justice system. To illustrate the fact that those systems and processes are in play today, I note that DNA samples that the police collect in Scotland are deconvoluted by completely impenetrable black-box algorithms that are sold by companies, and different algorithms come out with different answers. A real challenge exists around transparency issues—a point that Daniel Johnson and others rightly raised—and the question of how AI actually works in the system.
AI is already used for triaging huge evidence sets, which are increasing as we produce different data streams that become part of the evidence. A significant challenge exists around the issue of disclosure between defence and prosecution and the way in which information is shared. Many of the algorithms are impenetrable; understanding them and having transparency is absolutely key.
I attended and contributed to a seminar at the Royal Society in London on the application of sentencing algorithms, which has happened in the United States. Many judges around the room expressed real concerns about the issue of potential bias in the system. It fell to me in that discussion to point out to the collected judges that the only black people in the room were serving the coffee. There are inherent biases in our systems as they stand, and those are not only reflected in the systems that are produced. We have to understand that we are not contrasting what happens with an ideal world; we have to test artificial intelligence in that regard.
We welcome the debate and thank the minister for securing it. We look forward to further updates from the Government.
I agree with Michael Marra about the quality of the debate. It was interesting that Pauline McNeill, Martin Whitfield and Ivan McKee all said that perhaps it has been good because there was no motion. I agree with that; it is quite pleasant to be away from the party-political ding-dong that goes back and forward all the time. Having no motion raises the tone; this debate has been a classic example of that.
I came to the debate with mixed feelings, and, having listened to every contribution—all have been interesting—my mixed feelings remain.
All the technological advances throughout history—in his opening speech, the minister mentioned the steam engine, and we have had mention of the telephone, the television and the computer—have come with a vast array of benefits. That is also true for AI. Fiona Hyslop made a poignant point when she mentioned the case in Switzerland in which, it was reported last week, a digital AI bridge has been used to decode the brain signals of a paraplegic person who can now walk again. There are so many benefits from AI in medical science, and it has transformational potential in patient care, as well as in the digital, gaming, space, diagnostics and—as Finlay Carson said—agriculture and fishing industries.
Michelle Thomson made the excellent point that, at the Finance and Public Administration Committee on Tuesday, when we took evidence on public sector reform, it was noted that AI has huge potential for such reform. That is much needed—not just now but for the foreseeable future—if we are to address the huge black hole between public expenditure and tax revenues. Therefore, we must be careful about resistance to AI.
I want to reference the editorial in last Saturday’s Financial Times, because it raised an important principle. The FT editor herself wrote that “nothing matters ... more” to her “than the trust of readers in the quality” of “journalism”—for “quality”, read “accuracy”, “fairness” and “transparency”—which is a refreshing thought from a senior editor. She said that generative AI is developing at “breakneck speed”, with profound implications for journalism—both good and bad. She ends by saying that
“FT journalism in the new AI age will continue to be reported and written by humans who are the best in their fields and who are dedicated to ... analysing the world as it is, accurately and fairly.”
That is an interesting comment. She is making the point that the leap towards artificial intelligence is that bit more challenging because we simply do not understand it, as Willie Rennie rightly pointed out in his speech.
Pam Gosal said that we have to be mindful that there will be trepidation about the possible consequences that AI could bring if it is utilised by criminal or terrorist organisations, which I am sure is a concern for many members.
As with all technological leaps, there is no going back. Once Pandora’s box has been opened, or the genie let out of its bottle, the immense opportunities that exist have to be taken, but we must be mindful that there will be an uncontrolled spiral of competition that leaves only two options: adapt or be left behind.
They say that you cannot halt progress, whether that is the growth of the internet and the subsequent decline of in-person services and retail, the smart phone that has become an essential technological companion to us all over the past 10 years, or even—we are told that this is progress—the removal of phones from our desks here in Parliament in favour of Webex software, which is more challenging to me than AI chatbots. Technological developments always cause irreversible change; it is how we harness the change that really matters.
A case very similar to the growth of AI was the advent of streaming platforms for music at the turn of the century. That not only totally revolutionised the entire industry and how artists could generate income, but caused numerous legal challenges and ethical issues. We have, in the debate, spoken a lot about ethical issues, with members highlighting just what those mean in the case of AI.
I mentioned at the start of my speech that I have mixed feelings, which is because, like Michael Marra, I have been thinking a lot about how AI will affect education. During my teaching career, I was always very interested in how we use knowledge—not just in the knowledge itself. Education should always be about developing inquiring minds and building resilience, but if something does the thinking for the student or the teacher, it will undermine and potentially remove the process of inquiry, and there is a danger that it will make them lazy.
I cannot deny that I would have liked the idea of an AI chatbot when I was at school, perhaps to help with a troublesome essay, a differential calculus solution or whatever, but I do not think that it will be long before problems occur, especially as AI has sometimes been found to fail.
I will be very quick. I absolutely agree with what Liz Smith is saying, but I would perhaps go even further and say that I fear that the processes that one goes through in education in order to be able to apply judgment in decision making would be lost because, as she pointed out, that process is about much more than knowledge. Does she agree?
I agree very much. That was a very good point. If something does the thinking for us, there is a real danger that that will take away a lot of the judgment process that we have been used to. That would be a whole different ball game, especially in education. I fully understand the concerns of colleges and universities about the implications of that, which Pam Gosal mentioned in her speech. Michelle Thomson has made a valid point.
I want to finish on the question of ethics, which is an incredibly important aspect of all our considerations. We need to have control of AI. That will be extremely difficult, because we do not understand the journey on which we are embarking. Not only must there be proper legislative regulation; it is absolutely necessary that Government and private companies continue to adhere to ethical standards and to uphold trust. I very much welcome what the minister said about taking a four-nations approach, because I do not think that we will get anywhere if we do not take such an approach.
AI is a very interesting area. We absolutely have to take it seriously, because it is the new world. We must get to grips with it, but we will be significantly challenged.
I thank members across the chamber for their thoughtful and often fascinating contributions to a debate on a subject that will affect the future of our country and our planet—AI will be utterly transformational. I listened carefully to the many views that were expressed. As Michael Marra said, there is a lot for the Government and me, in particular, to reflect on, given the number of good points that were made. We will certainly do that in the days and months ahead.
I was pleased that Daniel Johnson admitted to using ChatGPT to help him to frame his speech, which we all thought was unexpectedly good; it was good of him to explain why that was the case. I am jesting, of course, because it is a consensual debate.
If we drive by a lawn that has a robotic mower on it, we think to ourselves, “That’s amazing,” and we drive by. If we pick up the newspaper and read about a driverless bus on the Forth road bridge, we think to ourselves, “That’s amazing,” and then we turn over the page and move on, but ChatGPT has sparked a global debate—everyone is speaking about it.
What is the reason for that? In my opinion, the reason is that it is accessible. Millions of people can access the technology. In addition, as a species—as human beings—we are reflecting on what it means for us because it speaks to us and communicates with us as a human being would do.
It is quite incredible and also quite ironic that, while we are debating potential scenarios facing the planet and our societies in the decades ahead, and although we accept that ChatGPT and other forms of AI will not replace humans and have not exceeded human capability, in one sense, AI has got one up on us, because we are all thinking, “We’re not quite sure how to respond to AI.”
Willie Rennie made a very important point when he said that, as politicians and as Parliaments, we must show humility. He is right. We must also act thoughtfully and continue to debate and to listen, both in and outwith the chamber. The Government has an essential role to play—that of representing the interests of all our people—but we do not have the answers, and that has been reflected in many of today’s contributions.
Is it not the fact that AI is the automation of decision making that we find so challenging? It speaks to something that many members have already commented on, which is the lack of transparency about how, and on what basis, decisions are made. That can be an innately fearful thing.
Yes, which takes us on to the debate about whether AI is trustworthy or can be ethical. Michelle Thomson and others mentioned Scotland’s Futures Forum’s recently published toolkit for looking at the issues. I thought that that was very valuable, as it got me thinking and flagged up issues that the Government and the public sector should think about regarding how to operate AI and use it effectively.
What we are experiencing just now, in Parliament and across the world, is a balance of excitement and fear. On the one hand, we are excited because we can see the potential for AI to improve our world, our quality of life and the Scottish economy and can see how the knowledge revolution can be used to improve education. We also have fears, because we can see threats and risks. We use the word “singularity” to mean the point at which a machine begins to think for itself and does not need human intervention, because it can develop its own intelligence. That is something that the human species will have to think deeply about.
There will be an impact on jobs, because AI can create jobs but also remove them. There will also be an impact on security and cybersecurity as countries, and bad actors, gain access to AI and can use it for nefarious purposes. We know that that is a deadly serious issue. Some members mentioned the arms race across the world to see who can get to the new technologies and use them first. We do not want the wrong people to get there first, because of the possible ramifications.
I touched previously on the point that data is the essential fuel that drives AI, which does not function without data. Does the minister believe that the Scottish Government’s current data policies are fit for purpose for the future and can maximise the advantages that AI brings? The Government is looking to develop a £92 million rural payments system. Does AI form part of the decision making about that?
We must think about how we manage and access data in this country. We are debating this subject today because we are not quite sure what the future is, so it is difficult to answer that question. We must constantly evolve and adapt as we learn about the consequences and potential of AI. That is really important.
Willie Rennie mentioned the importance of politicians and Parliament having good advice, which is why I am pleased that we have the Scottish AI Alliance, chaired by the very talented Catriona Campbell. She is an expert in human-computer interaction and a successful entrepreneur, and she has a number of incredible jobs not only in Scotland but elsewhere in the UK. She is the new chair of the Scottish AI Alliance and, as I said in my opening remarks, we are asking the Alliance to review where Scotland is with AI and to look at the potential for our economy as well as at how we can manage and deal with the risks of AI.
I must give a wee plug for the book “AI by Design: A Plan for Living with Artificial Intelligence”, which Catriona Campbell published last year. I saw her yesterday at the Data Lab in Edinburgh. A Scot has written that book and it is worth a read. I did my best to get through it last night after she gave me a copy in preparation for this debate. The book goes through the various challenges and opportunities facing Scotland and looks at the wider debate across the globe.
The question of jobs has been a key feature of the debate. Clare Adamson and others spoke about how people were fearful of losing their jobs during the industrial revolution and of how old jobs were lost and new ones created. That is the story of history. The Luddites, who worried about the impact of textile machinery on their livelihoods, were mentioned. We must make sure that people are equipped for AI in their current jobs where that is possible and we must ensure that we, as a country, have the skills to create new AI jobs and new employment opportunities.
The minister makes a good point about being prepared, but part of the job of Government is to make sure that we have the required skills. We have raised time and again the declining numbers of young people who are taking STEM subjects in secondary school. Reversing that trend must surely be an absolute priority for the Scottish Government if we are to be able to cope with the situation.
Again, that is an important point, and it is something that the Government and Skills Development Scotland are addressing. In that context, I want to mention Ivan McKee, who said that computing science is a concern of his, as it is for other members in the chamber. Our chief entrepreneur, Mark Logan, mentioned in a recent meeting that he wants to see more support for computing science teachers so that we can meet the needs of the future Scottish economy. We have shortages at the moment. That is important and we have to look at it more seriously. I am up for that, as are my colleagues in the Government, and people in the computing science profession are working with us to try to address the issue in our schools.
On the subject of Ivan McKee, I want to pay tribute to him. We have many of the building blocks in place in Scotland to make sure that, as a nation, we are ahead of the game. We are one of the leaders in the world in exploiting AI for the benefit of society, jobs and economic growth in our country. Ivan McKee is not responsible for all the building blocks that are in place, but he has played a role over the past few years and I pay tribute to him for that.
Yesterday, I was at the Data Lab, which is based at the Bayes centre here in Edinburgh. I know that Brian Hills, its chief executive officer, is in the public gallery today. Even though I had been before, I was amazed—again—by everything that I learned is happening on our doorstep, not just in Edinburgh but in other cities and communities across the country, with the research and the developments that are taking place. We should be proud that we are certainly in the lead in making the most of AI to improve our society.
I do not have much time left, but I want to mention the fact that AI has the potential to transform our lives. It is already doing that, but it will do so much more in the future, transforming our economy and delivering enormous benefits. I will give a couple of examples of what is happening in the NHS. First, NHS Greater Glasgow and Clyde’s osteoporosis treatment identification using machine learning—OPTIMAL—project is investigating the use of AI to detect osteoporosis early. Secondly, at the start of May, the Beatson west of Scotland cancer centre started using an AI-enhanced linear accelerator to conduct better-targeted, personalised and adaptive radiotherapy.
There are many other examples of work happening in hospitals that is using AI to detect cancer and treat it early in all kinds of ways.
It is incredible. It is amazing.
AI has a lot of potential to improve our lives and support our economy through economic growth, but it is really important that we get the ethics right, ensure that it is trustworthy and manage it as a Parliament and as a country. We must ensure that we make the right decisions and work on the global stage with the UK Government, our colleagues in Europe and the international institutions to get this right in the interests of humanity.