I am pleased to be in the chamber today to debate the topic of artificial intelligence and data-driven technologies and the opportunities that they offer to the Scottish economy and society, as well as the challenges that they pose. It is an important topic, which increasingly touches many aspects of the lives of our constituents.
The Scottish Government has a vision to
“Use Scotland’s data to its full potential by driving innovation, improving public services and unlocking economic value—saving time, money and lives”.
We are mindful that data innovation can benefit the Scottish economy and improve the productivity and efficiency of organisations, including those in the public sector. It can also attract new businesses and highly paid jobs. In delivering our commitment in the programme for government to develop an AI strategy, we are trying to ensure that Scotland maximises the potential economic and social benefits of AI and sends a strong signal to the world about our ambition.
However, we also recognise that AI raises several important issues that need to be addressed urgently to ensure that it is used ethically and that people in Scotland can benefit from the changes that it will bring to how we live and work in the future. Our intent is therefore to develop a strategy that has the citizen at its heart and the benefits to the citizen as its core guiding principle to ensure that no one is left behind and the strategy is aligned with the national performance framework. Over the next year, the Government will work with the public, industry, public bodies and organisations, academia and beyond to set out Scotland’s ambitions, principles and priority actions on AI and a route to securing public support as the precursor to realising economic, social and environmental value.
Today is an opportunity to start that national conversation in the Scottish Parliament and I look forward to engaging with members across the chamber. I am sure that we will have a robust debate because, on the one hand, there are potential benefits and, on the other hand, there is a need to have a debate to ensure that citizens are at the heart of the strategy and are not left behind and that some of the negatives around AI do not cause them to feel concern and fear. I also hope that we can agree on the fundamentals that will enable the opportunities of AI and data-driven technologies to be realised for Scottish society and the economy, including having a strong ethical underpinning that has public support, as outlined in the motion.
There is no commonly agreed definition of AI among experts. For the purpose of the debate, we can think of AI as a set of techniques that are used to allow computers to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, translation between languages and decision making. Everyone will have heard media stories about AI that promise either utopia or dystopia but often both. AI is very much a misunderstood revolution and unwarranted hype and fears obscure the real opportunities that we need to seize and the real challenges that we need to overcome.
Absolutely. I do not disagree with that, nor would I say that all the hype is unwarranted either. That is why in the debate and in the strategy we are trying to recognise that, although AI is happening to us and we cannot stop it, we need to have a debate about how, with the powers that we have, we can have an ethical framework that puts citizens at its heart and materialises in the regulations and decisions that we make on AI. There are both potential benefits and risks, and we cannot downplay either.
What should be clear to everyone, though, is that, in many ways, AI is already in our lives and our homes and it is here to stay. Data-driven technologies more broadly are having an increasingly large impact on almost all aspects of human activity and every sector of the economy, which means that we cannot afford to ignore them or the opportunities and challenges that they bring.
It is of course notoriously difficult to make predictions about the future, but experts agree that AI and data-driven technologies have the potential to boost Scotland’s economy as well as create opportunities for society. For instance, PricewaterhouseCoopers estimates that the benefit from those technologies could be worth over £16 billion in 2030, which would be 8 per cent of gross domestic product and would provide £2,000 of extra spending power per household annually.
I agree that there are clearly economic opportunities, but to take them our people need to have the right skills. In that regard, can the minister comment on recent data that shows a decline in the number of our young people leaving school with qualifications in science and mathematics?
Daniel Johnson is right to identify the need for skills in that area, but it goes much further than that. Although we need people to have the skills to be able to be part of the development of the technology, an even bigger challenge is ensuring that we have the skills that will add value in the next few decades when things such as AI replace jobs, which is one of the risks that we need to have a robust conversation about. In particular, softer skills are critical, because they will not be replaced by artificial intelligence and robots.
There are two challenges. We have to ensure that we have digital, computing and mathematics skills. I take digital skills seriously, and I can talk about a number of strategies that we are deploying to boost such skills. One of the key discussions that we need to have is how we prepare our young people, in particular, to have the skills of the future when jobs as we know them will be significantly changed.
It is important to recognise that the economic value of data to Scotland lies not just in the creation by tech companies of new products and services but in their adoption across all sectors to improve the competitiveness of our companies and create even better public services. For several years, we have been working to realise that opportunity through a number of data-driven initiatives and projects.
For example, the Scottish Government, the national health service, the Scottish Ambulance Service and other partners have come together to tackle the deadly issue of out-of-hospital cardiac arrest. Every year in Scotland, more than 3,000 people have a cardiac arrest while not in a hospital, and fewer than one in 10 survive, with people in deprived areas having the worst outcomes. By linking our data, we have been able to understand the issue in unprecedented detail and track progress against our goals, and now more patients receive bystander chest compressions, and more survive.
There are other examples of data being used more smartly to solve the challenges that we face.
On the argument about the increasing capabilities that we will have if we share data in a smart way—whether in Government and public services or in the private sector, which is hoovering up a huge amount of data and metadata—surely there is a need for us to engage with the problem of consent. How do people give meaningful consent to the ever more complex and sophisticated and bewildering relationship between their data and public or private sector power?
There are two answers to that, neither of which is simple or straightforward. The first is about raising awareness and having a genuine discussion about what people’s data is currently being used for. There is a significant misunderstanding about data in public discourse. On one hand, there are fears about how data is being misused but, on the other, there is a conversation about how data can be better used, particularly in the public sector. Two things have brought that to the fore: one is the roll-out of the general data protection regulation, which has reminded people about the importance of consent; the other is the debate about the misuse of data, particularly by tech giants, which has been highlighted over the past few years.
The second answer is that we have to grapple with that right across the public sector, which is where we have control, where data is being shared with medical services or other organisations, as people are being actively asked for their consent and being informed about how that data will be used before it is used.
None of the benefits or the promise that I have identified—whether in the public sector or elsewhere in the economy—will materialise if we do not acknowledge and address the challenges that those new technologies bring, some of which have been outlined. AI raises new ethical issues about using people’s data, as Patrick Harvie’s question suggested, and making decisions that affect them. We want Scotland to continue to lead in the safe, secure and responsible use of data for social and economic benefit. We want to build and maintain public confidence in that journey and we want our businesses to benefit from the opportunities that innovation affords.
Yes—very much so. A key plank in the development of Government policy in that area has been data and our discussions about the ethical elements, as well as the opportunities that come from using data better. The data delivery group, which was established earlier this year, helps to inform that discussion about how data is used.
At the DataFest summit, which was held earlier this year in Edinburgh, the First Minister stated:
“Using data ethically isn’t a barrier to using data effectively. It is a prerequisite for it. It is the only sustainable way of maintaining public trust”.
We do not view the question of ethical AI as a zero-sum trade-off between the interests of citizens and economic interests. Instead, we suggest that trustworthy, human-centric AI is the prerequisite and foundation for realising the full economic benefits of AI. We will investigate how it could be a competitive advantage.
When developing Scotland’s AI strategy, we will broker an honest, meaningful dialogue between the people of Scotland and all relevant stakeholders about AI’s role in their lives and the concrete actions that will be taken to address specific issues—not just a set of well-meaning abstract principles. We will do that openly, transparently and in partnership, using the Scottish approach, as we did with the national performance framework. In that spirit, we have commissioned the Data Lab and the Democratic Society to lead that work, bringing together their respective track records of data-driven innovation and fostering participative dialogue in Scotland.
I could go on, but I will come to a conclusion, because other members will raise other points.
At the heart of that work is putting citizens first and ensuring that citizens and their needs shape the AI strategy.
That the Parliament recognises the potential of artificial intelligence (AI) and data technologies to disrupt every sector of society; notes that AI and data can improve economic, environmental and social wellbeing in Scotland if it is underpinned by a strong ethical framework for the way these technologies are used, and a national data infrastructure that allows data to be shared appropriately for the benefit of the public, and considers that, through development of an AI strategy, Scotland has the opportunity to be an international leader in data technologies in a way that enhances the country’s reputation, safeguards citizens’ rights, secures access to fair work and brings new jobs and investment to Scotland.
Today’s debate on artificial intelligence and data-driven technologies is important. At decision time, we will support the Scottish Government motion.
We have lodged an amendment to the motion in order to highlight the significant opportunities that are available for the development of AI in Scotland through the United Kingdom industrial strategy. I will come back to that later.
The minister opened the debate by emphasising the massive opportunities and challenges that will arise from artificial intelligence. She provided an update on initiatives in that area, including the development of a nationwide strategy. We welcome those initiatives.
We also recognise the vital cross-sector collaboration by key stakeholders in the area, including the Royal Society of Edinburgh— Scotland’s national academy—the Scottish Council for Development and Industry, ScotlandIS and BT Scotland, whose valuable joint report looked at the future impact of these technologies in Scotland. We also recognise the significant work of the Data Lab in pioneering the nationwide strategy to which the minister referred. If Scotland is to fully realise our economic and social potential in the fourth industrial revolution, that collaborative, cross-sector approach will be essential.
Artificial intelligence is a massive subject, which spans from what Elon Musk, the founder of Tesla, has described as the “single biggest existential crisis” that humanity faces, to what Stephen Hawking described as a “new form of life” that will outperform humans. However, for the purpose of today’s debate, I will focus on the transformational impact that AI will have on every aspect of Scotland’s economy.
As the minister said, PwC estimated that AI technologies could lead to the creation of more than half a million jobs in Scotland and could add more than £13 billion to the economy. Machine learning is already driving revolutions across a number of sectors by unlocking the predictive power of large data sets. In sectors such as healthcare, machine learning is used to generate predictive outcomes for NHS patients, resulting in a transformation of treatment options.
I very much agree with what Dean Lockhart is saying, but does he also recognise that it is important to get a representative set of people and occurrences for the input for machine learning? Too many machine-learning activities have turned out to be limited in cultural and ethnic diversity and therefore could potentially give misleading guidance in the future. That is an important issue that we would all wish to consider.
Stewart Stevenson makes a very good point. The outcome of predictive measurement is only as good as the underlying data. Machine learning can help to improve the accuracy of the data that goes into the process in the first place, but the point is well made.
As PwC said, the impact of AI will result in the
“biggest shake-up in a lifetime” to Scotland’s labour market, which could result in the displacement of almost 540,000 jobs—almost as many as might be created. During last week’s business in Parliament conference, we heard that that displacement is already taking place in professional services, including in the legal, accountancy, architecture and design sectors.
The joint report on AI that I referred to earlier stressed the importance of recognising that the different technologies involved in AI are at different stages of maturity and levels of sophistication. That is why it will be important that we have a national strategy that recognises the multilayered and complex opportunities and challenges associated with those technologies. The joint report makes a number of recommendations on the key actions required to underpin the national strategy, and I will set out our approach to them.
First, we agree with the report’s recommendation for the introduction of a scheme to teach a growing percentage of people in Scotland the basics of AI, which would be modelled on Finland’s 1 per cent scheme. That recognises that a population educated in the basics of AI will be better placed to embrace those technologies. It will be essential that knowledge of the basics of AI extends to schoolchildren as well being part of lifelong learning for adults. The problem that we face, which was alluded to by Daniel Johnson, is that we have seen a decline in the number of maths, science and computer science teachers in recent years and, when it comes to lifelong learning for adults, we have also seen a decline in the number of part-time college places that are dedicated to science, technology, engineering and mathematics and digital subjects. We need to make sure that no one is left behind and that Scotland’s population is educated to take advantage of new technology, so it is vital that we address that underinvestment in education and lifelong learning and get the basics right.
The second key recommendation of the joint report is for the establishment of an independent advisory body to explore the potential for AI technologies and to look at skills development and funding in the area. Again, we agree with the recommendation. Time and again, the Economy, Energy and Fair Work Committee has heard evidence of a digital gap in Scotland’s business environment. Only 9 per cent of businesses in Scotland embed digital in their operations, compared with 43 per cent in other countries. That digital gap must be overcome if we are to take advantage of AI, which is why we have been calling for the establishment of a dedicated institute of technology and e-commerce—a specialised support agency for Scotland that would help large and small businesses across the country to take advantage of opportunities in digital, data and AI. I look forward to hearing the minister’s response to that initiative, because it has gained significant support in the business community.
Another central recommendation of the joint AI report is for Scotland to participate actively in the UK industrial strategy. The UK AI sector deal will place the UK at the forefront of the artificial intelligence and data revolution. The UK will be leading the world in the safe and ethical use of data through the new centre for data ethics and innovation. A good example is the robotarium and the ORCA—Offshore Robotics for Certification of Assets—hub, which have been developed at Heriot-Watt University in collaboration with the University of Edinburgh and have received significant funding from the UK research and innovation fund.
I welcome the fact that the UK Government is at least exploring the ethical dimensions of the issue. Does the member feel comfortable with its current position, which is that it is willing to invest in the development of autonomous weapons systems, but is not willing to support international attempts to regulate lethal autonomous weapons? If we are looking to develop such things, surely we should be regulating them at the same time.
Thank you very much, Presiding Officer.
The point is that regulation of AI has to be done on a multilateral, multinational basis. The UK Government is talking to countries around the world to look at the right regulatory framework to cover the issues that Mr Harvie has raised.
The final recommendation in the joint report that I want to address is its recognition that although Scotland has global centres of research excellence in AI technologies and data, Scottish institutions are not commercialising such innovation. In a number of recent debates, we have heard about the need to do more to support universities and colleges in commercialising their R and D and innovation activities. That is why Scottish Conservatives have highlighted that the cut of more than 11 per cent in university funding that has taken place in Scotland over the past five years is not the right direction for the country’s education policy to take.
Scottish Conservatives will support the Scottish Government in developing a new strategy for AI and data technologies, but that cannot be viewed as a stand-alone piece of work. If we are to realise Scotland’s potential, work is urgently required to get the basics right in education, business support, innovation and closing the digital gap. On all of those issues, we call upon the Scottish Government to work together with the UK Government under its industrial strategy.
I move amendment S5M-19822.2, to insert at end:
“; notes that the AI Sector Deal under the UK-wide Industrial Strategy will place the UK at the forefront of the AI and data revolution, by investing in research and development, skills and regulatory innovation, supporting sectors to boost their productivity through AI and data analytics technologies, and leading the world in the safe and ethical use of data through the new Centre for Data Ethics and Innovation, and encourages the Scottish Government to work with the UK Government under the Industrial Strategy to take full advantage of the opportunities available under the AI Sector Deal.”
Although the debate is likely to be interesting, it is yet another debate on just one section of the Scottish economy. Developments such as AI do not happen in a vacuum; they have an impact on other areas of the economy. In order to reap their economic benefits for all our citizens, we need to have an industrial strategy that will co-ordinate our response—one that will pull all our economic levers together and ensure that we create the right conditions to maximise their impact.
Artificial intelligence and data-driven technologies have a lot to offer. They are already changing the way in which we work and live: from Alexa to driverless cars, a huge range of AI technology is already in use. It has the ability to change lives for the better—but it will definitely change them, so we need to make the right choices now to ensure that we will all benefit.
I recently attended a conference in the Western Isles on the subject of dementia. One of the speakers there was very knowledgeable in that area. When he developed dementia, he put that knowledge to good use. He is now planning for his future to be spent at home, maximising his independence by using technology. He already uses Alexa to order his shopping from a local store—not a large supermarket chain, but a small, privately owned shop. He is adapting his fridge to allow his wife, who works away a lot, to see what is there and to order what he should be eating, while, with the help of Alexa, he will continue to order what he probably should not be eating. That allows him independence, but it also allows others to look after him. He does not like being told what to do, so he has programmed Alexa to suggest what he should do and why—thereby using persuasion rather than demand. He knows how artificial intelligence works and programmes it to suit his needs. He admits that positivity was not his first reaction to his diagnosis—as anyone else would be, he was absolutely devastated—but he is now positively taking control of his future.
Such technology will become more and more available and will help people with all kinds of conditions to live their lives more independently. They will not all need to be experts, but they will need the skills to ensure that AI meets their needs. Of course, a note of caution should be sounded: such technology is not like a human being and can do only what it is programmed to do—it is a tool, not a human substitute.
People talk about jobs being lost to AI and robotics. That should not happen, but I recognise the danger of our allowing it to do so. I have never found a form of technology that has allowed me to work less; I have always seen it as allowing me to do more. I believe that that is true of all new technologies. The danger lies in people not keeping up their skills and training so that their jobs are easily replaced by machines that they do not know how to operate.
That point is made in the briefing from the Royal Society of Edinburgh and its partners. They state that there would be a net increase in jobs and that
“it would also be ‘the biggest shake-up in a lifetime’ to Scotland’s labour market”, with
“the displacement of 544,000 jobs”.
They suggest that priority therefore has to be given to retraining people who will be displaced and ensuring that our education and skills development is fit for the future. Other members have made that point.
That brings into sharp focus the importance of lifelong learning. We have lost a third of our places in further education under this Government, yet further education is where those skills will be learned. The speed of change means that we require constant upgrading of skills.
Close the Gap also sent us a briefing, which points out the underrepresentation of women in the technical sector, where only 23 per cent of the workforce are women. That not only affects women’s earnings, but affects the products that are made by the sector and their suitability for women, and that not only disadvantages women in the technical sector, but impacts on the jobs that they can do in every sector where products and tools are designed for men. The smartphone is an example.
We cannot afford to leave people behind, and that is why this debate must not happen in a silo. It must happen as part of a wider debate around an industrial strategy. We need to look ahead at new technologies and how to develop them and maximise the knock-on jobs, putting the technologies into practice and ensuring that we have a workforce that is ready to do that. Very soon, every job will require skills in technology. The pace of change is incredible and it is speeding up. We need to ensure that our workforce keeps pace with that.
This is not just about work; it is about every aspect of life. We read about the technology that will bring about driverless cars, which is fast approaching, but there are already cars that react to differing road conditions to make them safer, from reacting to icy conditions and ensuring that the car slows down to ensuring that, when the driver brakes hard, it is enough to stop the vehicle in an emergency.
All of that will lead to improvements in our standard of living, but we all need to be part of it and to benefit from it. AI will have the ability to create a larger gap between the haves and the have-nots if we do not take steps to address that now, and preparing the workforce for the change will determine whether we can all benefit. The UK industrial strategy rightly encompasses AI, but we do not have a Scottish industrial strategy at all. The Scottish Government needs to move away from ticking boxes and silo thinking. It needs to produce an industrial strategy that ties all our economic levers together, ensuring that we have a plan to maximise the positive impact of the changes in our industrial base.
I move amendment S5M-19822.3, to insert at end:
“, and considers that this approach should form part of a wider Scottish industrial strategy, ensuring that all of Scotland’s economic opportunities are not only secured but also coordinated across the country in order to safeguard the labour market from widening existing gender and economic inequalities from AI, and striving to achieve inclusive growth in the process.”
I welcome the fact that the debate has been brought to the chamber. Politics can be very short term. Perhaps especially during an election campaign, we are all a little bit guilty of looking just at what is immediately ahead of us. As a society, we need a chance to have a forward-looking debate about what the future has in store for us, as well.
When Parliament debates things such as digital participation, we sometimes focus only on the positives—how many people are online and how fast their broadband connections are—instead of thinking about how we are using the technology and how it is changing society. That is not a criticism of any one party or of government, as opposed to the private sector; it is something that we are doing as a society.
Technological change constantly forces us to think differently about how we will deal with the new opportunities and challenges that lie ahead. Einstein wrote, in his time, that
“Today the atomic bomb has altered profoundly the nature of the world as we knew it, and the human race consequently finds itself in a new habitat to which it must adapt its thinking.”
I defy anyone to suggest that the digital world and the prospect of artificial intelligence will not alter our world every bit as profoundly. Our thinking rarely keeps pace with the changes around us, and we are only just beginning to come to terms with thinking about a connected and networked world.
When I was a kid, I read science fiction stories about the idea that we would all have a device like my smartphone, with which we could, at the touch of a screen, communicate with any person anywhere in the world and access the sum total of human knowledge. It was a utopian idea, and I never dreamed that it would unleash the social-media bin fire that we now live in, or of how it has opened up opportunities for unscrupulous people to hack our democracy.
We need to begin to think about such issues, and we have a great deal of catching up to do on new developments, including AI. An open question faces us all: will artificial intelligence be a tool to help us all to expand our capabilities and intelligence, or will it become a way for us to outsource our intelligence, our thinking and our human agency to technology that we do not really control?
So much of the development in AI is being done by the private sector, which is focused on the opportunities and the economic benefits that it might gain, but not so much on the potential downsides for society.
Some of the biggest challenges might come from possibilities that we cannot predict and from questions that we do not even know how to ask, although it is necessary that we do our best to do so. Having this conversation is not a rejection of the positives. I see more upsides than downsides, but if we are to truly maximise the social benefit that technology offers us, and minimise the risk of harm, the conversation is necessary.
I was, therefore, not happy to see the debate being framed purely in terms of opportunities, so I lodged an amendment that sets out some of the risks. I welcome the work that is under way and that the Scottish Government and the UK Government have tentatively begun to do. There is recognition that we need an ethical framework, but we also need to acknowledge that we do not yet have it, even at the theoretical level, and that even if we achieve it at the theoretical level, we are still far from having the regulatory tools that can enforce such a framework. I was interested to look at the Data Lab’s website on that work, but there is no mention of what an ethical framework might encompass.
The location of the tech companies is only one of the problems. The fact that some of them appear to be run by sociopathic billionaires is a much deeper problem than their mere location.
However, the minister is right to recognise that we need to reach out to others around the world. As I suggested to Mr Lockhart, instead of so vociferously resisting and trying to block international regulation on lethal autonomous weapons, the UK Government could be reaching out and trying to find out how we could achieve that regulation. Most countries around the world want regulation of, and legislation on, a pre-emptive ban on lethal autonomous weapon systems. The UK is one of a small number of countries that are vociferously resisting that move. There is scope for international work and for deliberative work; the use of citizens’ assemblies might be one way of engaging the public in the wider discussion.
The UK Government’s centre for data ethics and innovation has been referred to, and I welcome the work that it has done; I have looked at some of the papers that it has published so far. I might have time in my closing speech to go into some of the positive ideas and opportunities that it has identified, and into quite how far the centre is from identifying solutions to problems that it has recognised.
There are already elements of an ethical framework in place. The general data protection regulation includes rights for the data subject on autonomous individual decision making: people have the right not to have decisions that have legal effects on them made by purely autonomous processing, including profiling. However, we have no way of knowing whether that is happening to us, and we do not have the tools to allow us to protect ourselves from it. As others have mentioned, the Close the Gap paper that has been circulated to members introduces arguments to do with the social bias that could be inherent in those systems.
In terms of economic justice, we have to ask who owns the tech. In terms of security, we have to ask who watches the watchers and where the human agency is. Power has to be accountable, but we do not yet have the tools to make the power that is embedded in AI truly economically or democratically accountable.
I move amendment S5M-19822.1, to insert after “benefit of the public”:
“; recognises that both Scotland and the wider world are yet to meet these preconditions and, in particular, that the development of an ethical framework requires significant debate as well as robust enforcement mechanisms, which are currently absent; further recognises that the concerns that relate to the use of AI include social bias in automated systems, unjust distribution of economic benefits, the integrity of democratic systems, safety and ethics in automated defence and security systems, privacy and the lack of human agency in situations requiring whistleblowing or challenge to corporate interests.”
I am grateful to the minister for securing time for the debate. I am not convinced that any of us understand the full magnitude of the changes that AI and data technologies will make to our lives and the lives of our constituents. The world around us is changing at an unprecedented rate—some people call it the fourth industrial revolution. Just today, the first autonomous bus trials were announced for my constituency, in which commuters will be taken from Ferry Toll in North Queensferry to Edinburgh Park station.
The internet has, in the past two decades especially, utterly changed how we interact with one another, how we work, how we shop and how we travel. AI and advances in robotics will do the same over the next two decades.
We need to welcome the advent of new technologies and the opportunities that they bring. However, people who do not have adaptable skills could be badly affected by a rapidly changing economy, so we need to prepare for that now.
The Office for National Statistics says that 1.5 million workers in Britain are at “high risk” of losing their jobs to automation. Women and part-time workers would be most affected. More than 25 per cent of supermarket checkout assistants have gone since 2011.
This morning, we learned that hundreds of jobs in Edinburgh could be lost to automation over the next three years. The Phoenix Group is one of Edinburgh city’s biggest private sector employers, and it is reported that 500 jobs could go as a result of work being transferred to Tata Consultancy Services. I would be grateful if, in closing, the minister could advise us whether the Scottish Government has had any discussions with the company about that plan.
Alongside financial services jobs, manufacturing, retail and transport jobs are among those that are at risk. New technologies can create high-paid, high-skill jobs, or they can replace jobs and turn us into low-wage drones.
Liberal Democrats have long argued that in order to cope we will need massive investment in education, skills and training. Few people nowadays have just one career; automation means that that could be the case for many more of us. The ability to retrain and learn new skills at every stage of life will become ever more important. Some 80 per cent of primary schoolchildren will do a job that does not yet exist, so it is vital that we gear up our skills economy for that reality. The Liberal Democrats want, for example, to repair Scotland’s colleges and replace the lost 140,000 places. Opportunities have evaporated for people who can study only part-time.
In the future, caring responsibilities, or the need to keep working, must not exclude anyone from developing their skills. Scotland will need to make the most of the diverse talents of all our people. The best way to build a high-wage, high-skill economy for the long term is to invest in their talents and wellbeing. That starts with the core skills of logic, verbal reasoning and creativity being learned at school, or even earlier.
Getting it right in the education system will help the United Kingdom to lead the world in the development of inclusive AI and automation. Staying ahead will need huge investment in research and development.
Innovation needs to happen within a framework. It needs to be ethical, as the minister said, and it needs to respect people’s fundamental rights—not least, the rights to privacy and non-discrimination. We have already seen how technologies can move much faster than legislators and policy makers. Our laws are in the closed-circuit television era, while authorities and companies deploy facial recognition technology and more.
The amount of knowledge and information that are at our fingertips is mind-boggling, but it appears that misinformation is a bigger problem than ever before. In just a few short years, technology has utterly changed the landscape of elections across democracies. We are still struggling to get to grips with Twitter trolls, Twitter bots, fake news, huge volumes of paid advertising and even election tampering.
Trying to deal with the implications of tech and data misuse after the fact, in the absence of proper legislative frameworks, leaves people entirely vulnerable.
Left unchecked, every great liberating change will bring terrible risks and problems too. As my federal leader, Jo Swinson, put it:
“New technologies can help us make better decisions, or they could embody the worst of human thinking. Artificial intelligence”, by its nature, “learns from us”.
One system for predicting reoffending that is used by judges, the police and parole officers in the United States has been proved not to be colour-blind. Black defendants who did not reoffend were nearly twice as likely as their white counterparts to be misclassified as higher risk.
Algorithms can discriminate, too. That is why Liberal Democrats have proposed introducing a Lovelace code of ethics that would ensure that the use of personal data and of AI is unbiased, transparent and accurate, and respects privacy. All courses relating to digital technologies should teach ethics, and there should be a kitemark for companies that meet the highest ethical standards. That would help people to make informed choices about to whom they give their money and data.
That is why we have some sympathy with the Green amendment—we believe that on the global stage, when it comes to warfare and the deployment of the military industrial complex, we need a new version of the Geneva convention to recognise weapons-grade technology in the AI world.
In conclusion, AI and data-driven technologies present huge opportunities for our economy and our society. I am optimistic and positive that they can help us to build a brighter future. They can make our world a better place, but there is a huge amount of work to be done by both Scotland’s Governments to ensure that they do not leave people or ethics behind in the process.
I am pleased to be able to speak about such a fascinating topic once again, having led a members’ business debate early last year on artificial intelligence. At that time I spoke about concerns that AI technology
“will destroy jobs and, indeed, entire industries faster than it creates them”, and, in some sectors, would enable a few companies to have a monopoly over the market by harnessing this new technology. That concern is still very much alive. Japan’s Henn na Hotel is almost entirely run by robots. A law firm in Chicago has an AI legal assistant named Ross, who handles bankruptcy cases and gets smarter with every case. Some Swedish and Italian care homes have now had healthcare staff replaced by robots. Strong ethics and governance are crucial: we have heard reports that, in China, facial recognition technology is being used to oppress the Uighur population, with shades of “1984”.
Can it really be that bad? Is every job now at risk, including blue-collar jobs and those of healthcare professionals and lawyers? With Stanford University developing an algorithm that can identify thousands of features from pathology images of lung cancer tissue, and casinos using AI rather than people to detect play and betting fraud, it is easy to agree with those who say that AI is already delivering a major shift in how people live their lives. To an extent, they are right; it is the beginning of a huge change to what we traditionally call work, but maybe it is time for Scotland to rethink what work is.

We hear a lot these days about data-driven innovation, but what is it? It is very easy for experts to throw around such buzzwords, but when it comes to artificial intelligence, it is important to be clear. Data-driven innovation reflects the rising importance of data in economic growth, public services and social change. High-speed data analytics are used to capture and understand data trends, which brings, according to the University of Edinburgh,
“a better and faster capability to identify trends and behaviour across many sectors, leading to improved services for customers and citizens”.
That brings me to the data-driven innovation initiative. That fantastic initiative, based in Edinburgh and south-east Scotland, is being implemented over 10 years by experts from the University of Edinburgh and Heriot-Watt University. They will collaborate on projects in the public, private and third sectors, the benefits of which will potentially be huge. Not only will there be opportunities for an increase in the contribution of university research and much sought-after graduate skills to regional economies; there will be the opportunity for jobs to be created through the launch of spin-off companies. Start-ups and established businesses alike will be attracted to Scotland, and public and private sector investment will be driven up as a result, which can only be good for Scotland. Although there are still understandable concerns that the advent of AI will mean the net loss of jobs and the robotisation of the jobs that are left, this new technology can and will create and sustain jobs in other sectors of the economy.
The number of people employed in some sectors will undoubtedly contract, as happened during the agricultural and industrial revolutions—as always happens with technological change—yet it is likely that more jobs will be created and that those will be more highly skilled and better paid.
In 2017, Heriot-Watt University’s Edinburgh campus introduced the information and computer technologies and robotics for independent living laboratory, which is essentially an entire flat that mimics a real home environment. The laboratory combines a network of wireless sensors, other devices with an internet connection and state-of-the-art domestic robots.
What is interesting about that project is that, in it, computer and robotics scientists work with health experts, sociologists and psychologists, as well as people who have assisted living needs, in order to find globally applicable solutions. Apart from the obvious advantages and benefits that the project will have for people with such needs, it will also create employment in ways that we have not yet imagined—and that is just one scenario.
September of this year saw the launch of the Scottish Government’s first AI strategy, a vision for how Scotland can unlock the full economic and social potential of artificial intelligence. Last year, I mentioned the report published by the Scottish Council for Development and Industry, ScotlandIS, the Royal Society of Edinburgh and BT Scotland, which was called, “Automatic... For the People?”
Following that, this year, a new and equally excellent report was published, “Building a World-Leading AI and Data Strategy for an Inclusive Scotland”, which explores how Scotland can put itself at the forefront of innovation and development in this crucial field.
Given our flourishing technology sector and the industry-leading minds in such institutions as Heriot-Watt University, Scotland will surely attract other experts and businesses to our country.
In a two-part blog published by PricewaterhouseCoopers early last year, we were warned that the factor that most correlated with “potential job automation” was the level of education of the employee. A number of members have already touched on that important issue.
Logically, that means that the best way to ensure security of employment and a prosperous future for our children is an appropriate education. In Scotland that might include a revision of what is being taught to children and teenagers and asking whether it is enough to keep up with what will be asked of them in the future. Is there enough on problem-solving or analytical skills, for example? I am sure that we would all agree that there is not. Classes involving AI should become as normal as English or maths in our primary schools, and that is a reality that we will have to face, sooner rather than later.
The impact of artificial intelligence should not be feared; it should be harnessed. We will have to ride this tiger whether we like it or not, so it is important that we understand it as fully as we can.
As New York University professor and Facebook’s chief scientist for AI research, Yann LeCun, said:
“Our intelligence is what makes us human, and AI is an extension of that quality.”
The potential and indeed current effects that artificial intelligence and its attendant advantages will have on the economy have been well documented and extensively considered.
The problem that we face in Scotland is that, although we have exceptional infrastructure for a great many things, much of it, as Robin Watson said just last week at the business in Parliament conference in this chamber, was suited to a century ago. My Conservative colleague Alan Mak MP highlighted that the so-called fourth industrial revolution has the potential to add £630 billion to the UK economy by 2035.
However, that comes with a significant caveat: the considerable restructuring of an economy such as Scotland’s needs to be carried out in such a way that the 15 million jobs that the Bank of England has said may be vulnerable to the proliferation of AI are repositioned to make full use of its advantages. We cannot allow the interface of AI and data-driven technologies to become an enemy of a great swathe of our employment market.
Throughout the centuries, the United Kingdom has been at the forefront of adopting new technologies and techniques as they have been invented and adapted, and our economy has been enhanced as a result. New developments do not have to affect people and economies negatively, although I accept that no Government has ever managed to get everything right all of the time.
In the 18th century, we saw the start of the extension of the right to vote to working people, and simultaneously embraced the leaps and bounds of steam power, assembly production and mechanisation in improving the lives and outcomes for working people.
We need to work on a cross-party basis, to ensure that reasonable scepticism about change to our economy does not prevent us from seizing with both hands new opportunities for innovation and progress.
Others have already mentioned the contributions of universities in Scotland, such as Heriot-Watt here in the Lothian region. Following their example, we should not think of the advance of Al as the enemy of employment but should work to ensure that it helps us to enable employment and the advancement of the improvements that we seek in the lives of the people of Scotland.
Data-driven technology can do much to help Scotland on its way towards, for example, a decarbonised and increasingly efficient energy supply. Marcus Stewart wrote in his report for the National Grid that smart devices and the internet of things have already made vast strides in preventing waste and allowing a more adaptable energy infrastructure across the country.
If the £13 billion of value that has already been referred to by others is to be fully realised by the Scottish economy, it is of the utmost importance that we address the new challenges that are presented by this technology at every level, educational and professional, and that we recognise that AI and data-based work should not be resisted but embraced. Less well recognised, and perhaps less well discussed or understood, is the leading role that those technologies will have for consumers. I will again refer to the example of energy production, in which Scotland leads much of Europe in its use of renewables. AI and data will play leading roles in driving down the cost of energy for working people.
The interaction between AI, data and other subsets of technology with our traditional economy is only set to expand further in the years to come. I therefore welcome the UK Government’s commitment to bring forward a national retraining scheme with an initial commitment of £100 million, which was announced in last year’s budget.
We can move forward with AI and technology in a positive and constructive way, and Scotland can play a leading role if we get it right. We must resist Luddite tendencies, while continuing to eat our lentils, neeps and tatties, but we must move forward with AI and the new technology that we have.
When we think of artificial intelligence, we often think of synthetic life forms, such as the character Data from “Star Trek” or the Terminator, but in 2019, artificial intelligence, albeit in some ways still in its infancy, is continuing to grow and show us its potential to transform lives.
I will focus my contribution mainly on healthcare, because AI presents vast opportunities for healthcare across the globe, which I am particularly interested in as I was an operating room nurse for more than 30 years. AI is beginning to have an ever-more-significant presence in our worldwide healthcare systems—the minister has already mentioned out-of-hospital cardiac arrests. Researchers, doctors and scientists input data into computers, and the newly built algorithms can then review, interpret and even suggest solutions to complex medical problems. That reduces the human time that is spent translating data from such things as X-rays, imaging studies, magnetic resonance imaging and computerised tomography scans into results for clinicians to interpret as well as into language that is accessible for non-medical members of the public.
At the Massachusetts Institute of Technology, researchers have created gyroscopically actuated robot limbs that are capable of tracking their own position in three-dimensional space and adjusting their joints 750 times per second. In addition, they have developed bionic skins and neural implant systems that interface with the nervous system to allow the user to receive tactile feedback from the prosthetic limb.
When I worked in the operating room at Cedars-Sinai hospital in the United States, we used two surgical robots—da Vinci and AESOP—that were designed to facilitate surgery using a minimally invasive approach. That reduces post-op pain and leads to earlier discharge. Da Vinci was controlled by a surgeon from a console—they did not even need to be in the operating room. The systems tend to be used for prostatectomies and, increasingly, for cardiac valve repair and gynaecology surgeries. They are even now assisting with lumbar decompression and renal procedures.
There is a viral video of a da Vinci robot performing surgery on a grape, which is well worth a Google. With those robotic surgical systems, the surgeon does not necessarily need to be in the same location as the patient; the surgeon could be here in Edinburgh, and the patient could be in the Antarctic, or even on the International Space Station. It is all very sci-fi, and I absolutely love it.
The world’s population is rapidly increasing. Globally, the population is living longer, with more complicated and acute healthcare conditions, and more support is needed for people as they grow older. We want folk to age well, which means that a larger healthcare and caring workforce is required, and AI can help there, too.
Kenneth Gibson mentioned that Japanese developers have created robot companions that can interact with people. Other humanoid robots, such as the Care-O-bot and Pepper, are able to provide more complex and comprehensive care. Although robot pets obviously offer limited interaction, they have proved just as effective as real pets in reducing loneliness for elderly people in care homes. Robotic dogs and seals have been found to trigger conversation and social interaction, and to reduce stress and anxiety. Humanoid robots are already advanced enough to provide much-needed care to elderly people. Those robots can pick things up and move independently.
In addition to AI advances, if we are to meet the demands of the future, we need more people working in our NHS, and more people studying medicine and medicine-related degrees. We have seen a sharp increase in the number of people who study STEM subjects: people who are our scientists and inventors of the future. They are the kids who Alex Cole-Hamilton described earlier, when he said that the jobs of the future, which will be done by kids who are in school now, have not even been invented yet. They are the people who will, undoubtedly, be responsible for developing and progressing AI in healthcare.
While I encourage all to be open-minded about the potential of AI, I recognise the need to ensure that any approach to AI is carried out in a way that is underpinned by a proper ethical framework. I welcome continued debate on furthering that, and on any regulations that may be required. Many of those who are critical of AI claim that it will lead to job losses, with robots taking over, and I recognise those concerns. I am pleased that the Scottish Government has committed to investing in our Scottish workforce, ensuring that people the length and breadth of our country—from the Lochans in Dumfries and Galloway to Lerwick in Shetland—have equal access to education and training to gain the skills, knowledge and expertise to be adaptable to the changing employment opportunities of the future.
In the words of Mr Spock from “Star Trek”,
“change is the essential process of all existence”.
We must embrace that change if we are to meet the demands of our future healthcare needs. I welcome this debate, and encourage everyone to share their views, especially on issues around promoting an ethical approach to developing AI. I would welcome the minister’s comments on that in her closing speech.
Thank you for making that invitation, Presiding Officer.
I welcome these debates for no less a reason than the one that Emma Harper just demonstrated: they provide us with an excuse to cite “Star Trek” without embarrassment. Patrick Harvie is laughing; however, I know that when he was describing his iPhone, he had in his mind a “Star Trek” data PADD.
Some of the dilemmas that are faced in science fiction are the very debates and dilemmas that we are considering today. However, they are also age-old dilemmas, because we have been facing the consequences of technology since we came into being. That can be seen from the invention of the wheel—a bit of technology that meant that we no longer had to rely on what we could lift on our back to carry items around with us—to the printing press, where a machine enabled us to print, almost instantaneously, a page that it would have taken a scribe an hour or so to produce. Then there is the computer, which used to be a person rather than a machine.
We have always had to deal with the consequences of technology change, and that technology change has invariably taken labour away from people and given it to machines. However, there is a difference now, and we need to be careful. Some people out there say that we have always had to deal with such change and that there is nothing new about it, but the pace and scope of the change are new. We have never before faced technologies that replace almost the entire supply chain or the complete scope of a human activity. That is the prospect that we are looking at with AI. We are looking at technology that has moved from robots that simply make widgets to algorithms that can analyse and plan, and we see jobs in accountancy and law, for example, being taken over by machines.
As well as looking at the what, we need to look at how the technology is replacing activities. Many members have talked about AI and automation, but nobody has really talked about machine learning. There are real challenges with machine learning. Artificial intelligence can learn to do things and carry out tasks very efficiently, but it cannot necessarily describe the rules and algorithms that it uses to do them. That is one of the defining aspects of machine learning. Previously, we were able to have accountability and to explain how things were done, but one of the key challenges with AI is that we might not be able to do that.
As many members have articulated, we need to ensure that we maximise opportunities. We need to minimise the impacts, but we also need to look at the new elements and issues that AI and machine learning throw up. Above all else, we need to ensure that we facilitate the transition. I will speak briefly about the three key elements that we need to focus on in relation to the transition.
My colleague Rhoda Grant outlined the vital importance of having a robust industrial strategy with AI at its core and serious investment at its heart. We should consider the industrial change that we have experienced in the recent past. We have got things wrong in failing to invest in new technologies. We lost heavy industries in Scotland because this country failed to invest in new technologies as they came in. That is why people lost their jobs.
Investments in much technology change, from GPS and satellites to the algorithms that allow phones to recognise people’s speech, were backed by state investment. We will be able to embrace the technology only by having a serious industrial strategy that is backed by state investment that can absorb the risks that individual companies cannot absorb.
Likewise, we must ensure that our people have skills. A number of members have talked about the skills that are imparted in school, for example. It is not just a matter of what skills our people have; it is also a matter of their ability to reskill time and again. It is critical that we stop viewing education as a linear pathway through life—a number of members have alluded to that. The reality is that, with the pace and nature of change, people will have to skill and reskill multiple times through their working lives. There cannot be apprenticeships that people can take only once in their career or undergraduate degrees that will be paid for only once. We need to look fundamentally at our education system to ensure that people can skill and reskill.
We also need to look at the impact on the state.
I have a question about reskilling and undergraduates. Does Daniel Johnson recognise that the Scottish Government has the Scottish graduate entry medicine programme for people who might choose to move forward on a different path?
That is a good example of facilitating reskilling, but it is one very small example. We need to embrace the fact that many people across multiple disciplines and professions may be faced with the need to reskill, and we need to facilitate that.
If we look at the challenges that we face, we see that the thought that we should somehow provide a basic preparation for the workplace only once in a person’s career is flawed.
We need to look at what we can deliver through public services. Patrick Harvie and Kenneth Gibson talked about how that can impact on healthcare, and Alex Cole-Hamilton talked about it in relation to the justice system. What can be delivered through AI is fundamentally different from what has come before. Another important issue is that of transparency.
I note the time, so I will draw my remarks to a close.
Ultimately, AI will bring about major change, and we will be able to embrace it only if we make the appropriate investments and provide the reskilling that our workforce will need in order to maximise the opportunities.
When I graduated from computer science—some time ago, it has to be said—computers of any significance were the size of a big room, and programs were keyed in on punched cards while the user waited for the printed output in another room. Those were the glory days, indeed—but oh, how things have changed. They are changing so fast that we will need to be pretty astute just to be able to keep up.
Our smartphones are millions of times more powerful than the computers that took us to the moon and back. AI and data-driven technology are transforming the way we live, shaping business growth, transforming the skills base of our workforce and determining our place in an increasingly digitally advanced and data-driven global economy. Our economic and social future is largely dependent on our ability to innovate and to harness the advances in digital technologies. That is particularly the case with regard to the data revolution that we are in right now, which many members have referred to as the fourth industrial revolution.
What is going on, and what do we need to do to keep pace and stay in the game? There is nothing new about data—it has been around for millennia. However, what is different is the computing power to do something meaningful and helpful with it. Previously, we could only dream about being able to analyse complex data and do something with it, but we can do that now, and the computers are getting faster and faster.
Members might be aware of recent developments involving quantum computing, where experiments by Google and NASA appear to show that we are on the cusp once again of an incredible and astonishing jump in computing power. So, watch out for more on quantum supremacy, as it is called.
I have only got six minutes, but I will try my best. Quantum supremacy involves the ability of quantum computers to work incredibly fast because, unlike binary digits, which must be in either the zero or the one state, quantum bits can be in a mixture of the two. That enables parallel processing, which gives us computing speed and power far beyond anything that we have known before. That is my best guess as to what it is.
The clinical application of genome technologies, such as high-throughput sequencing and big-data analysis, is making a real impact in the fields of medical research, oncology and genetic disease diagnosis. A particularly active area has been the development of tools for tumour DNA sequencing and analysis. It is now possible to perform sequencing of tumour samples and identify the mutations in a patient’s tumour, thus allowing a precise diagnosis and the selection of the most appropriate therapy.
Robotic handling systems are enhancing productivity and defining quality systems and standards in manufacture. On a lighter note, there was even a Eurovision song that was totally created by AI. Maybe the UK should use AI programmers to try to avoid being last every year in the Eurovision song contest. Perhaps the recent winners have always been AI compositions—I think that we should be told.
However, seriously, the challenges are enormous and the Scottish Government’s move to develop an AI strategy is pretty fundamental now. It needs to embrace three key areas: assisting with research and innovation in hardware and software development; investing in and attracting the skills to deliver the aims; and, importantly, developing the ethical framework around all of this in order to protect individuals and businesses from the clear dangers that misapplication of the technology can create.
The prizes—and the risks—are substantial. There is an estimated £18 billion-worth of productivity and innovation benefits to be realised, as well as an additional £500 million a year in exports for Scottish companies that embrace data to enhance their operations. AI is fundamental to businesses’ ability to market and tailor products and services to new and expanding markets.
For Scotland to compete effectively in this global economy, it needs to widen its export and digital export base. This is perhaps an appropriate point at which to mention the possible spectre of Brexit and the negative impact that it could cause. Just when the European Union is developing its ideas and potential on the digital economy, the Tory party wants us to leave the EU and somehow set up our own digital market. It is a bit like trying to invent your own ocean. That is totally backward thinking, in my view.
The digital economy is a key sector in Scotland and, if we stay connected to Europe, we can share in that €400 billion-a-year economy, rather than walk away from it. Allied to all that, we need to build competence and confidence in how we apply the vast array of digital solutions in society, to promote growth and prosperity, to include those who are currently excluded and to protect people’s rights and freedoms.
According to the International Federation of Robotics, the number of industrial robots worldwide is predicted to double by next year, reaching a total of 3 million. That technology might match or even exceed human capabilities on tasks such as complex decision making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation, so the opportunities and the risks are great.
The task of government and industry is to prepare for that and to ensure that our people can be part of that revolution of change. It is within that dynamic context that the Government’s prioritisation of an AI strategy becomes crucial, and it should be welcomed. I am pleased and relieved to see the strategy coming forward. Our future as a technological, innovative nation depends on it.
In a former life I was a farmer and then an information technology consultant, specialising in wi-fi technologies. In the short time that I have been away from that industry, there have been massive technological advances, with solutions that we could only have dreamed of now coming to the market, which are delivering transformational change to the everyday lives of people across Scotland and around the world.
As we might guess, the fourth industrial revolution is the fourth major industrial revolution since the initial industrial revolution in the 18th century. It is characterised by a fusion of technologies that is blurring the lines between the physical, digital and biological spheres. Collectively, advances in artificial intelligence, data technology and 5G technologies are the foundations on which the fourth industrial revolution is being built.
The World Economic Forum has stated:
“We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another.”
Like the revolutions that preceded it, the fourth industrial revolution has the potential to raise global income levels and to improve quality of life for populations around the world. Make no mistake: the fourth industrial revolution represents a massive and fundamental change in the way in which we live, work and relate to one another. It is a new chapter in human development, enabled by extraordinary technological advances.
The fourth industrial revolution is about more than just technology-driven change, however; it is an opportunity to help everyone and to harness converging technologies in order to create an inclusive, human-centred future. The real opportunity is to look beyond technology and to find ways to give the greatest number of people the ability to positively impact their families, organisations and communities.
We already have some fantastic projects in Dumfries and Galloway, including the work that is being done by Loreburn Housing Association and HAS Technology. They are changing lives right now, achieving results using advanced risk modelling for early detection, or ARMED, as it is commonly known. That involves a mixture of sensors and AI and helps elderly people to adopt technology that predicts the risk of falls and enables faster support. Over a six-month trial period, there has been a 25:1 save to spend ratio, with those utilising the technology having zero falls. We have a perfect example there of how artificial intelligence is already working to the benefit of people living in our communities.
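The ARMED approach described above can be sketched, purely illustratively, as sensor signals feeding a simple risk score. The real ARMED model is not public, so every feature, threshold and weight below is invented for illustration:

```python
# Illustrative only: a generic sketch of how wearable and home-sensor
# readings can be combined into an early-warning fall-risk score.
# All features, thresholds and weights here are invented.

def fall_risk_score(steps_per_day: int, grip_strength_kg: float,
                    night_bathroom_visits: int) -> float:
    """Combine a few sensor signals into a 0..1 risk score."""
    score = 0.0
    if steps_per_day < 2000:          # declining mobility
        score += 0.4
    if grip_strength_kg < 20:         # frailty indicator
        score += 0.3
    if night_bathroom_visits >= 3:    # night-time movement, disturbed sleep
        score += 0.3
    return min(score, 1.0)

# A falling step count plus low grip strength flags this person for
# early support, before any fall has occurred.
print(fall_risk_score(steps_per_day=1500, grip_strength_kg=18,
                      night_bathroom_visits=1))
```

The point of the sketch is the shift from reactive to predictive care: the system acts on leading indicators rather than waiting for a fall.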
I agree that that is a good example of a benefit, but does it not also raise the same questions around legal liability as self-driving cars do, for example? Does Mr Carson acknowledge that we have not yet resolved questions about legal liability when an AI system that has a person’s safety or wellbeing under its control goes wrong?
I thank Mr Harvie for that valuable intervention. We have discussed such issues previously, but we can often assume that the horse has bolted when it comes to data protection and the legal framework around technological advancements. Technological change is moving so quickly that it will be difficult to stand still and take a look at the direction that we are travelling in. We therefore need to look at the legalities around AI and the protection of the individual when it comes to data.
However, the example of AI that I highlighted demonstrates how data can play a significant and positive role in facilitating healthy ageing and independent living. There is a huge recruitment problem for healthcare workers in my constituency, but that type of innovation can bring the biggest benefit and, arguably, the biggest bang for the taxpayer’s buck in delivering an intervention in remote and rural areas.
I am aware that the Data Lab has been working with the public, private and voluntary sectors countrywide to help realise the other benefits that AI can bring to Scottish healthcare. The cancer innovation challenge is an example of how AI and data science can benefit Scotland’s medical profession and patients in a project that encourages partnership working to help people with cancer by looking at variations in data. Machine learning is driving a revolution across a great number of fields by unlocking the predictive power of larger data sets.
We welcome those huge advancements, but if future technology transformation is not managed appropriately, rather than tackling a lack of equality and equity in healthcare, it could lead to greater inequality in rural areas. We have also heard about AI’s potential to disrupt labour markets. Indeed, it has been estimated that AI could lead to the creation of 558,000 jobs in Scotland but also the displacement of 544,000 jobs. That will probably be the biggest shake-up in a lifetime in Scotland’s labour market.
As a rural MSP, I believe that inequity represents the greatest societal concern surrounding the fourth industrial revolution. My constituents in Galloway are experiencing that inequality right now because of connectivity or, specifically, the lack of it. Sadly and concerningly, even at the most fundamental levels, the Scottish Government is failing. People in rural areas are fed up with figures being quoted about superfast connectivity when many businesses there do not even have access on the ground to reliable basic broadband. The Government has pledged to deliver 100 per cent superfast broadband by 2021, but the R100—reaching 100 per cent—programme is stalling. Never has there been a scheme that better illustrates that the Scottish Government is more about grandstanding than delivering.
We must do much more, right now, to help Scotland’s digital technology sector and do everything possible to avoid a widening digital divide across the country, which, if not addressed urgently this time round, could have a devastating impact on rural areas. The Government must go further and faster. I welcome the development of an AI and data strategy and I sincerely hope that, for the sake of Scotland, the Government turns around its reputation for being tired, stale and out of ideas. The Scottish Government needs to address the concerns of the Fraser of Allander institute, which said in an economic commentary:
“In 2007, the Scottish Government set out a new approach to policy centred upon a single economic strategy which all public sector initiatives were to align behind. But over the past decade, this clarity of focus and delivery has arguably been lost”.
A fit-for-purpose AI and data strategy is critical for Scotland’s economy, including the rural economy. With the right strategy, Scotland can be at the forefront of this fourth industrial revolution.
I will respond to some parts of the debate, but particularly to the remarks that we have just heard about the R100 programme. It might be worth reminding members that schedule 5 to the Scotland Act 1998 reserves two specific areas for which we are not responsible: telecommunications and internet services.
Therefore, where we are moving ahead to implement high-speed broadband in every premises in Scotland that wants it, we are—
The member will not.
Let us move on to—to be blunt—more interesting things and talk about quantum computing, which Tom Arthur raised. It is related to the quantum excitation of the Higgs field, which affects the operation of the Higgs boson. The Higgs boson is a particularly interesting sub-atomic particle with a spin of minus one half, which has a referential between two instances at a distance that is not constrained by the speed of light—it is a unique particle. There is a connection to Edinburgh, in that Professor Higgs is from here.
Artificial intelligence sprang from the work of Professor Wolfson at Heriot-Watt University in the 1970s. At the weekend, I tried to find my book on that, which is somewhere in a box in my garage, but I just could not find it. He designed a manual computer constructed of matchboxes that was a self-learning machine. It was mechanical, not electronic, and a very interesting thing it was, too.
Daniel Johnson might care to note that algorithms have been around for a while. The first algorithm was created by Ada Lovelace in the mid-1800s.
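The Lovelace reference has a concrete anchor: her Note G on the Analytical Engine (1843) set out a program for computing Bernoulli numbers. As an editorial illustration, the same computation can be sketched with the standard modern recurrence, rather than her exact tabular method:

```python
# Lovelace's Note G described computing Bernoulli numbers on the
# Analytical Engine. This sketch uses the standard recurrence
# B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k, with B_0 = 1.
from fractions import Fraction
from math import comb

def bernoulli(n: int) -> Fraction:
    """Return the nth Bernoulli number (convention B_1 = -1/2)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        total = sum(comb(m + 1, k) * B[k] for k in range(m))
        B[m] = -total / (m + 1)
    return B[n]

print(bernoulli(2), bernoulli(4), bernoulli(6))  # 1/6 -1/30 1/42
```

Exact rational arithmetic (`Fraction`) is used because Bernoulli numbers are ratios of integers and floating point would lose them.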
The debate is not about artificial intelligence but about artificial learning—that is just a quibble that I have. Intelligence is about being able to invent and learning is about being able to innovate; computers can innovate, but I am not at all sure that they can invent.
A lot of the debate is about data and some concerns about data are not particularly new. I will quote that most reliable of sources: myself. Forty-five years ago, in a talk that I gave, I said:
I was not alone in saying that 45 years ago, and many of the things that we are discussing today are not particularly new.
The power and ubiquity of computers are having a profound effect on many parts of our economy. It is just another industrial revolution, which will eliminate some jobs and create many more, as previous revolutions have done. China and the US are probably the leaders in that. The US is a country that is pretty good at creating companies and individual wealth—we can debate that on another occasion—and China is a great technological innovator. Those are complementary strengths.
I very much welcome the fact that the UK Government has produced an AI sector deal—leaving aside the fact that I do not think that it is AI. It is interesting that the companies referenced in the AI sector deal are almost all companies that have come to the UK. That is good and they are welcome, but the intellectual property that comes from that effort does not remain in the UK; rather, it is to the benefit of jurisdictions elsewhere. We certainly have to step up to the mark in improving our education system.
Emma Harper was correct to talk about the use of AI in health. It is important to make services more cost effective and to improve patient treatment and outcomes. We will be able to speed up diagnosis by learning from information available from diagnoses that were previously made by humans. Automating the process will speed things up, but it is important that we leave the oversight and responsibility with humans.
The Industrial Centre for Artificial Intelligence Research in Digital Diagnostics was launched last year at the University of Glasgow, so that is Scotland’s contribution to using AI in a way that will benefit society as a whole.
We will have new tools. We will be able to deal intelligently with the huge challenge of climate change; AI can help us with that. Leaving aside autonomous vehicles, AI in vehicles is already reducing the consumption of fuel, by helping them to use it in a more intelligent way. Public transportation can be improved by the application of AI in individual vehicles and in controlling and making better use of the network.
AI amplifies human skills; it does not replace them. Our job is to ensure that we always know where the data that we are using has come from and that we protect it. We must always ensure that the paramountcy of the human being remains.
I am not an expert on the issue, but we all recognise and welcome the ways in which technology can improve our lives. We have heard some of those this afternoon. This growing industry offers job opportunities in Scotland, but trade unions and equality organisations are raising concerns. As we go forward, we must listen to those concerns.
There are issues to be addressed to do with the collection, sharing and use of data as well as the consideration of how, if we use it positively and responsibly, artificial intelligence can enhance lives.
One practical example of a positive use is the charity Royal Blind recognising the importance of AI and data sciences in diagnosing eye conditions, such as age-related macular degeneration. It points out that the collection of large-scale data and advanced analytics could become an invaluable way to fight sight loss. Free eye tests provide access to the required large-scale data, but patients must always be involved in the decisions about how that data is used. There is a common fear of not having control over artificial intelligence. However, experts are clear that, as humans author AI, whoever owns it is responsible for what it does. Systems can be engineered for accountability. If, or when, they do not behave properly and discriminate, the humans behind the systems are at fault.
I turn to equality. There is a serious and recognised problem with the underrepresentation of women at all levels of the tech sector. In her opening remarks, Rhoda Grant mentioned the workforce. Prioritising the training and recruitment of women as AI specialists would reduce the number of platforms and systems that are designed by men, who are ignorant of or indifferent to women’s lives.
In the 1960s, when I started in IT, the workforce was 50:50 men and women. One academic paper shows that, when the BBC computer was introduced in the early 1980s, more men started going into computing because parents gave the boy the computer and not the girl, so there are probably cultural issues as well as technical ones.
I will expand on that issue. I do not have the knowledge that Stewart Stevenson has of what has gone on in the past.
Caroline Criado Perez has brilliantly described how the world is designed for men in her book, “Invisible Women”—from voice recognition systems that recognise only the lower tones of male voices to systems that are developed to select the best candidates for job interviews. If those systems are not properly and responsibly developed, they will discriminate. The sophisticated algorithms will choose CVs that are similar to those of previously successful candidates. Often, those are men. There have been examples of that happening. In other words, an industry with too few women will remain that way.
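The CV-screening mechanism described above can be shown with a deliberately tiny, invented example: a model that scores candidates by similarity to past hires inherits whatever skew those past hires carry. All features and data below are fabricated for illustration:

```python
# Invented illustration: a screening model ranks new CVs by similarity
# to previously hired CVs. If past hires skew one way, the "neutral"
# similarity score inherits the skew.

past_hires = [
    # (years_experience, played_rugby, attended_coding_club)
    (5, 1, 1),
    (7, 1, 0),
    (6, 1, 1),
]

def similarity(cv, hires):
    """Average per-feature agreement with past hires (higher = 'better fit')."""
    return sum(
        sum(1 for a, b in zip(cv, h) if a == b) / len(cv) for h in hires
    ) / len(hires)

candidate_a = (6, 1, 1)  # matches the historical profile
candidate_b = (6, 0, 1)  # equally experienced, but no rugby on the CV
print(similarity(candidate_a, past_hires) > similarity(candidate_b, past_hires))
# The rugby proxy, irrelevant to the job, decides the ranking.
```

No feature here mentions gender, yet a feature correlated with the historical intake is enough to reproduce the historical pattern, which is exactly the failure mode the speaker describes.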
Research into the results of searches that are associated with certain professions or roles has found a shocking gender data gap. The words “woman” and “girl” are commonly associated with family, and “man” with career. Cultural and gender stereotypes are already present within AI. Therefore, not only do we have to safeguard future use, we must now call to account all the tech companies that collect data and provide us with answers when we go online. If we search for a picture of someone cooking, the results will predominantly be images of women. If we search for an image of a computer programmer, the majority of the results will be images of men. Gender stereotypes are further perpetuated by software developers, who create care bots and customer service bots that look like women. It is vital that organisations that are developing software demonstrate that diligence has been applied in the creation of that software.
Large corporations should be called to account for the proper functioning of their AI systems and full disclosure must be made of the systems and algorithms in play. It is also vital that AI systems be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they must include appropriate safeguards. In fact, it is imperative that those basic principles are guaranteed.
There are other, practical, considerations that require attention from the Government. As Rhoda Grant said, Labour has a comprehensive industrial strategy that would address the needs of the data industry, and AI in particular. Making progress in the field will depend on the fast roll-out of next-generation broadband, investment in infrastructure and investment in a highly skilled workforce. However, the Scottish National Party’s previous election promise to deliver 100 per cent superfast broadband by 2021 looks unlikely to be met, and I would be grateful if the minister would comment on that in summing up.
Scottish Labour already recognises the implications of the changing labour market. Jobs will be created, but they will also be displaced—a point that was made by others during the debate. Those workers deserve a just transition, where workers benefit from the advances in technology. As Frances O’Grady, the general secretary of the Trades Union Congress, said,
“It’s time for working people to share in the benefits of new technology. That’s why unions have been arguing for less time at work, more time with family and friends and decent pay for everyone.”
In closing, I want to quote from Caroline Criado Perez’s remarkable book, “Invisible Women”, which I mentioned at the start. It highlights the many ways in which women are forgotten, particularly with the use of AI.
“Imagine a world where your phone is too big for your hand, your doctor prescribes a drug that is wrong for your body, where in a car accident you are 47% more likely to be seriously injured. If this sounds familiar, chances are you’re a woman.”
I welcome the opportunity to contribute to this important debate on artificial intelligence, data-driven technologies and the opportunities that they present for the Scottish economy and society.
This Parliament has allowed me many opportunities to speak on a whole host of topics and to learn a great deal as we move forward as a country. When I was elected to this place, my daughter encouraged me to take up Twitter. I was not keen at first, but I enjoy sharing the work of Parliament, showing the events that I attend and seeing many moments that my colleagues share, from new births to birthdays and much more.
That technological advance was new to me. I thought I would get 50 followers; I actually have more than that. My staff even joke about me now using contactless payments—another technological change in our modern and evolving world. On my phone I can even see my bank account. No, I cannot see yours, Presiding Officer.
Today’s debate is an opportunity to focus once again on an area of exciting innovation and development. We have an opportunity to address the prospects and challenges of artificial intelligence and data-driven technologies, in order to ensure that they benefit the Scottish economy, improve productivity and efficiency and attract new business and jobs.
Artificial intelligence has a genuine potential to greatly benefit the Scottish economy. According to a PwC report from 2017, UK GDP will increase by up to 10.3 per cent by 2030 as a result of artificial intelligence, amounting to an additional £232 billion. The impact of AI across industrial and commercial activities in the UK could boost Scotland’s GDP by up to £16.7 billion by 2030, which is 8.4 per cent of GDP. The boost is the equivalent of around an extra £2,000 of annual spending power per Scottish household—a genuinely staggering figure.
The report on “The Value of Big Data and the Internet of Things to the UK Economy”, published in February 2016 for Scottish Enterprise, suggested that data innovation could potentially benefit Scotland by £20 billion, and it has been suggested that £1 billion in public sector efficiency savings are possible annually through the better use of data. It is clear that a wealth of opportunity is available through AI.
That is why, as it has set out in its latest programme for government, the Scottish Government is taking significant action to ensure that technological change can address challenges and lead to economic opportunities. As the programme articulated well, Scotland is rich in data, and our public sector holds an immense amount of information that could be transformed for social and economic good. The programme stated that the Government
“will shortly issue a call for artificial intelligence projects to help us tackle complex issues, such as climate change, awarding grants of up to £100,000 to foster new ideas and develop practical solutions.”
It also set out a range of actions that will create the conditions to enable industry and public services to innovate with confidence, encourage inward investment and give people reassurance that technological advances will benefit Scotland socially and economically. Those actions will include developing an AI strategy that will ensure that Scotland maximises AI’s potential economic and social benefits. They will also include launching next year the new research data Scotland service, to provide support for researchers to access and use data and, crucially, to commit to targeting high-unemployment, low-productivity sectors to support them in adopting and embedding digital technologies.
Scotland has a proud history of invention, innovation and technological advance, so it must see itself not just as a user or consumer of new technologies, but as the inventor, designer and manufacturer of them. To ensure that the Scottish workforce has the requisite skills to thrive as technology advances, the Scottish Government is working to ensure that the planning and commissioning of its annual £2 billion investment through the Enterprise and Skills Strategic Board are better co-ordinated and more responsive. It will continue to encourage students to pursue science, technology, engineering and mathematics careers through careers advice and guidance in schools and through the developing the young workforce programme. I welcome recent campaigns to encourage more young women to pursue careers in STEM areas, which are vital pieces of work to increase diversification in our sectors.
Beyond that, the Scottish Government is investing strategically to make Scotland the best place for data innovation and AI, including £13.5 million for phase 2 of the Data Lab, which was announced in September 2018. To encourage digital adoption across sectors, over the past six years the Government has invested £25 million to support businesses in transitioning to digital and to encourage a pipeline of digitally skilled workers. It has also supported projects via its £13 million investment in Skills Development Scotland’s digital skills investment plan.
As I come to the end of my contribution, I wish to recognise that as well as the opportunities that AI presents, there are concerns and challenges around its potential to lead to job losses and so on. I know that the Government’s strategic labour market group will continue to provide advice on a range of matters, including the challenges of automation. It will work in conjunction with industry and academia to gain a full understanding of future technologies and to make informed judgments about the move to greater automation in the market and the introduction of artificial intelligence.
Although the use of AI raises ethical issues, the Scottish National Party wants our country to continue to lead in the safe, secure and responsible use of data for social and economic benefit, putting people first and securing their support and trust. I stress that we should put people first, because that is the Government’s priority on this issue and much more.
I thank Kate Forbes for bringing the debate to the chamber in Government time. We have heard a very informative and engaging series of exchanges this afternoon. I also recognise the work that, as minister, Kate Forbes puts into the area. There is rarely a day when I do not see in my Twitter feed how, through one activity or another, she is engaged in promoting the Government’s work. It is great to see that AI is such a central part of the Government’s thinking and strategy and that the Government recognises both the opportunities and the challenges that AI presents.
I was wondering whether artificial intelligence would have been useful today, as it would perhaps have allowed AIs to deliver speeches in the chamber and allowed us to do what we are all actively thinking about and be out campaigning. Alas, we are here and contributing to the debate—as we should be.
There have been a number of threads to the debate, and I will touch on three if time allows. The first concerns work. There has been a great deal of discussion about the potential for job displacement. It is right that we consider the implications of that, but it is equally important to consider the potential for job augmentation. That is where, rather than a job being displaced, new technologies that are data driven, such as artificial intelligence, increase productivity and, as such, enable people to complete tasks more quickly than would otherwise be possible, increasing the free time that individuals have or giving them the opportunity to progress to other tasks. It also allows people who cannot currently complete tasks through lack of skills or training to complete those tasks.
When we think about retraining and skills, it is important to note that AI will allow people to do things that they cannot currently do. That is significant because the debate that we are having on AI and how it will affect jobs leads to the larger question of the world of work and what the purpose of work is. My view is that, since the first industrial revolution, we have made significant progress in reducing the time that people need to spend selling their labour in order to earn a wage and be able to live. AI offers us an opportunity to further decrease the hours that individuals have to work each day and the years over a lifetime in which individuals have to work, freeing up that human capital for other, more socially productive uses. We must factor that in when we consider what AI could mean for work—not just individual jobs, but the collective experience of work and its role in society.
Another area that is worth touching on is the implications for ethics, and specifically for our democratic process. A number of speakers have touched on that. We are aware of the potential of algorithmic learning to target and profile individual voters based on their social media activity and their consumer activity online, which can consequently be used for specific messaging. We will see that develop further. We are on the cusp of deepfakes, whereby convincing multimedia content that is indistinguishable from authentic content is presented on social media—or other types of media, for that matter—broadcasting fraudulent messages and content to individuals. That poses a genuine threat to our democratic process, so we need to have a robust set of rules and regulations in place to address it.
The final area that I will touch on picks up on what Patrick Harvie said about outsourcing our human agency. The process of increasing automation and the use of AI is a gradual one that is slowly but surely percolating into all aspects of our lives. Patrick Harvie made the point that, with the mobile phone, which would have seemed almost magical only 30 years ago, we can contact any individual on the planet, but it is not just any individual—it is any thing. From my phone, I can turn on the light in my living room and monitor my smoke alarms. With many people using smart heating, we now have systems that are examining and learning from people’s habits. When they turn up or lower the thermostat, the machine learns from that and it can then make predictions about their habits. The control of that information is important, but I make the broader point that we will gradually cede more and more of our executive decision making to machines. Driverless cars will be a watershed moment when we place our personal safety and that of our families in the hands of a computer or an artificial intelligence.
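The habit-learning that the member describes can be sketched in miniature. This is an invented editorial illustration, not any real smart-heating product’s method: the device logs manual adjustments and predicts a preferred setpoint from past behaviour at the same hour of day:

```python
from collections import defaultdict

# Invented sketch of a habit-learning thermostat: every manual
# adjustment is logged, and the preferred temperature for a given hour
# is predicted from past behaviour at that hour.

class LearningThermostat:
    def __init__(self):
        self.history = defaultdict(list)  # hour of day -> setpoints chosen

    def record_adjustment(self, hour: int, setpoint_c: float) -> None:
        self.history[hour].append(setpoint_c)

    def predict(self, hour: int, default_c: float = 18.0) -> float:
        """Predict the occupant's preferred temperature for this hour."""
        past = self.history[hour]
        return sum(past) / len(past) if past else default_c

t = LearningThermostat()
for day in range(5):  # a week of evenings: the occupant likes it warm at 7pm
    t.record_adjustment(hour=19, setpoint_c=21.0)
print(t.predict(19))  # 21.0 - learned from habit
print(t.predict(3))   # 18.0 - no data, fall back to the default
```

Even this trivial version makes the member’s point about control of information: the history dictionary is, in effect, a record of when the household is at home.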
As we gradually cede more of our executive decision making, I am concerned about the broader implications for society. With the calculator, we outsourced our mental arithmetical capacities. With more advanced AI, will we start to outsource our decision making and thus become more reliant on machines to take decisions of greater significance for us?
As human beings, our intellectual hardware—our brains—evolved for the circumstances of southern Africa 200,000 years ago, yet we are now confronted with highly advanced technology. There is therefore a danger that, as we outsource more of our executive decision making to machines, and because, as human beings, we have a propensity to seek counsel in the numinous and the mystical, we will become more reliant on those machines. That will have significant implications for political decision making and, more broadly, for the decisions that we take collectively as a society and as individuals.
Perhaps inevitably, as the member who lodged an amendment that highlights some of the potential downsides or risks in this field, I have been cast by one or two members as the doom-monger or the Luddite. That could not be further from the truth. I am not convinced by Aldous Huxley’s argument that
“Technological progress has merely provided us with more efficient means for going backwards.”
I reject that idea. I am not even as cynical as George Orwell, who said that progress is not an illusion; it happens, but it is usually disappointing. I am a fan of the smart thermostat and I am glad that I can tell my heating to come on, but I do not want someone else to get in there and do it for me.
I say to Daniel Johnson that I am unashamedly a science fiction fan. Science fiction is often better than politics at prompting society to think about these questions in advance and explore what they mean for society.
I appreciate it, especially as I did not speak in the debate.
At a ScotlandIS event a few years ago, I was fortunate to see Vint Cerf, Mr Google and father of the internet, as he is also known, talk about his concerns about being fitted with a heart monitor. His questions were exactly the same. How is it done? How is it controlled? Who can get access to it? I absolutely share Patrick Harvie’s concerns. If the father of the internet is worried about such things, they should be on everyone’s radar.
I appreciate it. As we all, knowingly or unknowingly, give more intimate information about our lives and even the inner workings of our bodies to these technologies, there is a danger that we will surrender some control.
Finlay Carson and Emma Harper spoke about some of the positive opportunities for technology to be used in personal care. I am absolutely not trying to say that we should put the genie back in the bottle and not explore those positive opportunities. However, it was interesting that in her speech, Emma Harper made some science fiction references and the two characters whom she cited were Data and Spock—not characters whom most of us would expect to be imbued with the empathy and compassion that we would want in those caring for us. The technology has potential but, at the moment, it lacks those characteristics.
Indeed, and I look forward to his return to our screens in the new year. However, let us acknowledge that we are talking about the application of technology that has not yet reached the level of making intelligent and informed decisions that we would expect a human being to make, let alone an empathic or compassionate one.
Rhoda Grant mentioned the use of Alexa, the Amazon smart speaker device, to empower someone in a new way that would not have been available to them in the past. Yes, those capabilities exist, but as they become ever more sophisticated, we will all need to be experts—at least by today’s standards—if we are to retain some control.
The centre for data ethics and innovation has been mentioned. Its paper, “Smart Speakers and Voice Assistants”, recognises that
“voice assistants provide platforms with new troves of data which they may potentially use to profile customers in new ways—such as analysis of sentiment or even aspects of mental health. The extent to which this occurs is opaque.”
As Elaine Smith said, there are gendered aspects of that use of technology. Many of the devices are better at recognising commands in stereotypically male voices. Furthermore, most of them are constructed with feminised names and stereotypically female voices, cast in the role of the subservient system. There are serious gendered aspects of the use of those technologies, but they are not inevitable—we can make choices about that.
Tom Arthur mentioned deepfakes. Another of the CDEI’s papers is about deepfakes and shallowfakes. In the past few days, members may have seen a video showing Boris Johnson and Jeremy Corbyn endorsing each other for the role of Prime Minister. It is a convincing and compelling piece of work. That took a bit of effort and resource to construct with today’s technology. The CDEI’s paper recognises that
“Deepfakes are likely to become more sophisticated over time” as the technology becomes more accessible, but
“even rudimentary deepfakes can cause harm”.
The paper also warns that “Legislation will not be enough”, and suggests the use of
“new screen technology” in order to identify them. Think about that. We will be asked to rely on artificial intelligence in order to recognise the manipulation that has been produced using artificial intelligence. Furthermore, we will be expected to do that without suppressing benign, creative uses of the same technology for innocent entertainment purposes. So, there are questions about how we can even achieve the objectives that we are setting with the existing technology.
Others have mentioned jobs and the question of whether we want people with the skills to develop those technologies and add value or, as I think we increasingly need, people with the skills to understand and resist them. With deepfakes, we are entering a period in which we will not only need the ability to recognise that the information we are being bombarded with might be suspect or manipulated; we will be in a situation in which no one can look at anything on a screen without assuming that it might be no more real than an android’s dream of electric sheep. That is a profound change to how people receive information and to how they understand the sources of truth in which they can have confidence.
We know that big employers such as Amazon are desperate to start automating their recruitment processes. Amazon’s first attempts have again shown that women and people with minority-ethnic-sounding names are likely to be discriminated against.
We have a serious set of risks. We recognise that an ethical framework is necessary, but we all need to recognise that that is not in place yet. We do not even know what it will look like and we do not have the regulatory tools that will be necessary to enforce it.
It has been an interesting debate. As many members have said, we are perhaps on the cusp of the fourth industrial revolution. That provides opportunities, some of which have been discussed. However, technologies such as we are debating also present a number of key challenges, especially to do with women, the labour market, inequality, skills, learning and—as Daniel Johnson said—the sheer pace of change that we will have to keep up with.
Almost all members in the debate have mentioned education, skills and making sure that Scotland is ready and equipped for the constantly changing data and digital industry. That should be at the forefront of any industrial strategy. The minister acknowledged that and the need for skills training when Daniel Johnson pointed out the decrease in the number of young people who are leaving school with STEM qualifications. Daniel Johnson also spoke about reskilling continually through life—a point that was also made by Alex Cole-Hamilton, who said that children who are born today will work in jobs that are currently unknown. Rather than attracting the skills to Scotland, we need to grow our own. As Dean Lockhart said, to do that we need to deal with the underinvestment in education and lifelong learning.
Elaine Smith spoke about gender issues in AI. Such technologies present a number of key challenges to do with women in the labour market through their inbuilt discrimination and bias and the skills challenges that they present. The way that the Scottish Government responds to the opportunities and challenges will determine whether AI will sustain or challenge women’s inequality. We know that women are underrepresented: that will be amplified if we continue to allow that imbalance. It is vital that the Government adopt a gendered approach, including using gender-disaggregated data and engaging with gender equality organisations when it is drawing up policy. A world that is designed by men favours men, as was starkly pointed out by Elaine Smith when she quoted Caroline Criado Perez, who makes for very interesting reading on the subject. I recommend that members look at what she writes.
A number of members talked about the application of artificial intelligence. Gordon Lindhurst talked about smart meters and how we can use artificial intelligence to tackle climate change, which is hugely important. Elaine Smith, Emma Harper and Willie Coffey talked about AI’s application in surgery and medicine; there are cutting-edge technologies there that can change people’s lives and make life easier.
Elaine Smith and Finlay Carson spoke about broadband. Broadband, and superfast roll-out in particular, will be the very platform on which much of this technology sits. The SNP seems to have abandoned its election promise to deliver 100 per cent superfast broadband by 2021, because no contracts have been signed and no minister has reiterated that commitment. We need to make sure that the infrastructure is there, because it is the road on which the new technology will run. If it is not, communities will be left behind because they cannot access it.
A lot of members talked about how we access and use data and the need for ethical standards on access and use. Patrick Harvie talked about the speed of change and how difficult it will be for ethical standards to keep up, especially when we need to regulate the technology globally. That is a challenge that we need to grapple with. If we cannot do that globally, then we need, within Scotland and the wider UK, to make sure that we apply ethical standards to the use of data here.
We need to make sure that people know about the use of their data. How we use a person’s data should be down to them, as Elaine Smith said, not down to people who seek to make a profit out of it. We need to be very clear on that. The situation is starkly shown in the use of social media. Social media technology is great: we can share information, but we have seen very quickly how it can be abused. There are lessons to be learned from past harvesting of data for commercial gain.
Our amendment talks about an industrial strategy. We need that to bring together all industrial levers, including AI. The strategy needs to bring together not just AI’s development, but its commercialisation and, indeed, the investment that is required in education and lifelong learning to make sure that we benefit from it. AI is not going to stop: it is with us and it will not slow down, so we need to catch up and to put the right framework in place to maximise the benefits and minimise the challenges. We need to equip our workforce for the changes that are yet ahead.
Today’s discussion has been largely positive, and there have been some interesting contributions from across the chamber. AI and data have the ability to change our society, to provide benefits and to grow our economy, but they also have the potential to disrupt.
As Dean Lockhart said, according to PwC, the UK economy is likely to grow by an additional 10 per cent by 2030, entirely because of AI. Its impact will be significant, so it is vital that we support our digital technology sector, not only to create useful technologies, but to create jobs in Scotland and across the UK.
It is welcome that the Scottish Government is looking towards preparing an AI strategy, but its approach must be coherent. We can look at what the UK Government is doing. The UK’s AI sector deal has been mentioned by several members. It is clear that it is an ambitious investment in the success of the sector, as part of the UK industrial strategy. The UK Government’s office for artificial intelligence is already working with industry and responding to the sector’s needs, because AI is, increasingly, not the technology of tomorrow, but the technology of today.
Our relationship with technology has changed fundamentally. In the past two decades what we buy, what we eat, where we travel and even our inner thoughts have become digitised.
Other countries have already seized the opportunity of a leg-up in the AI and data stakes. For example, we can look to the United States and, in Asia, to China, South Korea and Taiwan. Those hubs of innovation have grasped the thistle, while it seems that we are only beginning to discuss those issues now. As Dean Lockhart reminded us, that is why the Conservatives have called for an institute of e-commerce in Scotland to help businesses to tackle the digital gap between Scotland and other countries. However, it is also necessary to ensure that we work with the progress that the UK Government has made in such areas, rather than duplicating or opposing it.
At its core, the debate is about innovation—an area in which Scotland has struggled to make serious improvements, despite our advantages and the excellence that can be seen in, for example, our university sector.
As has been highlighted by a number of members, an area that has certainly been neglected is digital skills. Our education system has been ill-equipped to provide the useful, adaptable skills that people will need for the future. Many reasons can be given—not enough money and specialist teachers, for example—but as with everything, such investment must be seen as an investment in the next generation, for whom data and AI will be important parts of their world and its economy.
I have spoken in the chamber on several occasions about enterprise, small businesses and start-ups, and about building a more flexible economy in which innovation can flourish. Unfortunately we still do not see any real signs of change.
We should also consider the impact of AI and data tech on our labour market. As others have said, there will be disruption. In previous debates, we have heard about automation, which has real potential to render existing jobs redundant. AI entails similar risks, as Alex Cole-Hamilton and Gordon Lindhurst said.
I am more optimistic, though. Advances in technology tend to increase wealth and improve living conditions, rather than reduce them. However, in order that we can transition fairly, a sensible strategy must look at the Scottish Government’s approach to retraining and reskilling. It is positive that the Government is finally looking at lifelong learning and mid-career reskilling, but those will be meaningless without real outcomes.
A strategic look at AI and data will require direction in all those areas, and wide engagement will be essential. We should also consider that data-driven technology brings new concerns. The amount of personal data on every citizen that is now itemised and stored is vast. That brings new challenges for the Government, not only in terms of regulation, but in how we use the “immense amount of information” that the Scottish Government boasts about holding. It is a short jump from questions of economics and public service to more fundamental ones of civil liberties, choice and consent.
We have seen in other countries how data can be used illegitimately against populations. What might be a driver of progress in one instance can just as easily be put to use as a cudgel in another.
Earlier this year, the minister, Kate Forbes, spoke to the Royal Society of Edinburgh about public buy-in to discussions about AI technology and its use. I agree with her. However, that conversation cannot be one-sided or tend towards a particular outcome.
I will turn to some of today’s speeches. The minister touched on how AI will impact on the lives of our constituents, boost the economy and create jobs. She spoke about the ethical use of AI—I agree with her on that—and she mentioned a number of organisations that the Government is working with, but she did not mention the UK Government. It might have been included in her phrase “and beyond”, but I hope that she will speak about that when she sums up.
Dean Lockhart spoke about how cross-collaboration might be vital to success, and about the digital gap and our need to overcome it if we want AI to be successful. Stewart Stevenson spoke of the need for data to be fully representative of society, and I think that he volunteered himself as somebody who will provide his services to the boffins, if necessary.
Gordon Lindhurst spoke of the opportunities to ensure that energy use is more efficient, and about how AI and data can play a role in reducing our energy costs. Emma Harper spoke of the role of AI and data in health and social care and in combating loneliness, which is important. Finlay Carson also highlighted the potential and existing role in the care sector, and the potential for inequality in rural areas and the need for infrastructure to be in place.
Rhoda Grant spoke about the importance of lifelong learning, as I did, and she and Elaine Smith spoke about the underrepresentation of women in the sector. They were absolutely right—we cannot afford to leave anyone behind.
Willie Coffey gave a very interesting explanation of quantum computing. I have no idea whether any of it was right—it is not my area—but it was extremely interesting. I will pick him up on one thing, though. The UK has come last in Eurovision only twice in the past 10 years, not every year, so I hope that he will not use that inaccurate information again.
Patrick Harvie and Emma Harper raised a number of important issues, although the debate turned into a bit of a “Star Trek” convention at one point. It made me think of the good old days of the Standards, Procedures and Public Appointments Committee and the “Star Trek” chat that we used to have.
The Scottish Government’s proposed AI and data strategy clearly has potential, and a positive approach to innovation is important. The expansion of AI through our economy and public services is as assured as any future trend can be. However, the potential for us to benefit will be realised only if the strategy operates in conjunction with the wider UK approach to the technologies, respects the citizen and addresses some of Scotland’s underlying problems with enterprise, skills and work.
It is good to hear that, across the chamber, we believe that Scotland is well placed to harness the potential of AI to benefit our people, our economy, our public services and our society more generally. We can draw on our many strengths as a country, our world-leading universities and our vibrant tech start-up scene. Edinburgh is recognised as one of the best places in the UK to start a business; some have already grown into billion-pound unicorns.
More fundamentally, our people and our history are as innovative and forward looking as ever. However, AI is a global race and we need to understand the opportunities and the challenges in that context. It is about leadership and our place in the world. The US leads the pack, China is making massive investment to catch up and the European Union is co-ordinating and funding the effort of its member states.
The question for all of us is how we position this country and ourselves so that we can influence the development and reap the benefits. We want to be both ambitious and pragmatic, rising to the challenges of AI to realise our vision of a human-centred AI strategy that people can trust and which puts people first. We have to see ourselves in that international context; the international dimension does not mean that we cop out of our own responsibility to understand the risks, to deal with them and to work across borders on the way forward.
In the context of the point that was made by the last Conservative speaker, we want to collaborate—and are already collaborating—on a UK-wide basis, and I have had a number of engagements on that basis. Speakers have mentioned a number of issues to do with skills and training, the future of work, innovation, ethics and algorithms.
I will start with skills, because STEM education needs to be, and is, a priority for this Government, as evidenced by the good progress that we have made on our ambitious STEM education and training strategy, which was published in October 2017. That strategy sets out a vision of a Scotland where everybody is encouraged and supported to develop their STEM skills throughout their lives to improve the opportunities for all, to meet the employer skills requirement and to drive inclusive economic growth.
I want to explore the concept of no one being left behind by AI and the issue of the STEM agenda. During the debate, a number of members raised concerns about the decline in the number of maths and computer science teachers, and about the cuts to thousands of STEM-based college places. Does the minister recognise that that will result in people being left behind? What measures will the Scottish Government take to address those cuts?
I do not accept that, because we are taking measures on those areas. Dean Lockhart is right to identify that teacher numbers are one of the biggest challenges. That is why the Deputy First Minister launched the career-change bursary to encourage more people who currently work in industry to go into the teaching profession, which has been relatively successful.
On the point about college places, it is about what kind of skills we produce. We have invested £25 million over the past six years to support businesses to transition to digital, and to develop a pipeline of digitally skilled workers. That includes investment in CodeClan, which Dean Lockhart will be familiar with and which is an industry-led digital reskilling academy that already has more than 800 graduates. It is about making sure not just that we have the people with the right skills but that we equip people to reskill in a way that works for them, because AI means that the future of work will change.
Some members quoted PwC statistics that recently forecast that AI could eliminate half a million jobs in Scotland over the next 20 years but simultaneously create enough new jobs to result in a net increase in employment through jobs that will require new skills. We have to equip people in different ways and address the gender gap that has been identified.
Others talked about innovation, and we need to consider how we create the right environment for data-driven innovation in Scotland to ensure that we are an inventor and a producer, and not just a consumer, of AI. That means that, if we are to retain jobs and profits in Scotland, our academic institutions need to continue to be world leading in AI research, and our businesses need to be quick to access the skills and the capital that are necessary to develop and adopt AI solutions. We have to show the world what we already know—that Scotland is an outward-looking and welcoming nation and a great place to do data-driven innovation.
I think that everybody who spoke in the debate mentioned ethics. We heard loud and clear that the view of the chamber is that there are some concerns around the ethics, which is understandable. It is important that we stress the importance of putting people first, using AI ethically and making sure that our privacy and human rights are respected and that we are treated fairly and equitably. As I stated in my opening remarks, we intend to develop a strategy that has benefits to the citizen as its core guiding principle and which is aligned to the national performance framework.
There are also valid concerns, which were raised by Elaine Smith and others, about the potential impact of AI on jobs, and on economic and social equality in Scotland. I am fully aware of those challenges, and it is vital that we ensure that our people gain the skills that they need not just to work in these areas but to cope with the changes that are wrought by this revolution and offered by AI.
At the heart of it, we want to know how AI affects us as citizens. We know that data-driven technologies have been misused elsewhere to interfere with democratic processes by spreading misinformation through online targeting. It is critical that we understand and address those risks and that our institutions, including public sector organisations, are transparent and open. As we adopt these technologies at work and in our daily lives, we all need the skills to make them work for us. Kenny Gibson mentioned the need for strong governance, which is true, and Willie Coffey talked about a general awareness of how our data is being used, whether that is in singing competitions or anything else.
AI raises new ethical issues relating to in-built biases. Quite recently, I was at an event at which it was mentioned that people who are developing AI solutions are moral engineers, because they embed their moralities, biases and prejudices in the apps that they build. AI is only as good as the data that it is based on, and it will perpetuate existing biases and discriminations if it is not carefully designed and governed.
I have reached 5 o’clock. I could go on at length about the other points that have been mentioned but, all in all, there is an understanding of the need to get the ethical framework right before we reap the benefits that AI will bring.