Children and Young People: Digital Technology - Motion to Take Note

Part of the debate – in the House of Lords at 3:25 pm on 17th January 2019.

Baroness Kidron (Crossbench) 3:25 pm, 17th January 2019

My Lords, I am very grateful to all noble Lords who have chosen to speak this afternoon, and very much look forward to each of their contributions. I refer the House to my interests on the register, particularly that as founder and chair of 5Rights.

Fundamental to this debate is the fact that we invented a technology that assumes that all users are equal when, in fact, a third of users worldwide and a fifth of users in the UK are children. It has been 150 years since we pulled children out of the chimneys and put them into school. Since that time we have fought on their behalf for privileges, protections and inalienable rights that collectively constitute the concept of and offer a legal framework for childhood.

Childhood is the journey from infancy to maturity, from dependence to autonomy. We design and mitigate for it in multiple ways across all aspects of society. We educate; we require doctors to obtain additional skills to practise paediatric medicine; we do not hold children to contractual obligations; we put pedestrian crossings near schools; we rate films according to age. Children have special protections around sexual activity. It is illegal for kids to smoke, drink and gamble. We even take steps to protect them in environments where adults smoke, drink and gamble.

In short, we provide a complex but widely understood and respected set of social norms, educational frameworks, regulatory interventions and national and international laws reflecting the global consensus that society as a whole must act in the best interests of the child, in the light of the vulnerabilities and immaturities associated with their age. The digital environment fails to reflect that consensus, and the cost of that failure is played out on the health and well-being of our children.

In setting out this afternoon’s debate, I shall concentrate on three areas: the nature of the digital environment, my concern about the way we conceive online harms and, finally, how we might support children to flourish. For children in the connected world, there is no off or on. Their lives are mediated by technological devices and services that capture infinitesimal detail about their activities, frame the choices available to them and make assumptions—not always accurate—about who they are. Theirs is not a world divided by real and virtual; it is a single lived experience augmented by technology. The vast majority of a child’s interactions are not deliberate decisions of a conscious mind but are predetermined. A child may consciously choose to play a game, but it is machine-engineered Pavlovian reward loops embedded in the game that keep them playing. A child may consciously opt to participate in a social group, but it is the stream of personalised alerts and the engineered measures of popularity that create the compulsive need to attend to that social group. A child may wish to look up a piece of information, but it is the nudge of promoted content and automated recommendation that largely determines what information they receive.

Those predetermined systems are predicated on a business model that profiles users for commercial purposes, yet businesses that sell devices and services in the digital environment deliver them to children with impunity—even though we know that screens eradicate the boredom and capacity for free play that very young children require to develop language, motor skills and imagination; even though we know that a single tired child, kept awake through the night by the hooks and notifications of a sector competing for their attention, affects the educational attainment of the entire class; and even though we know that for teenagers, the feedback loops of social validation and competition intrinsic to social media play an overwhelming role in their state of mind and ability to make safe choices.

The children we work with at 5Rights make the case that it is simply not possible to act your age online. As one young boy said, “Online, I am not a kid but an underage adult”. His Royal Highness the Duke of Cambridge said about the tech sector:

“Their self-image is so grounded in their positive power for good that they seem unable to engage in constructive discussion about the social problems that they are creating”,

and he went on to list:

“fake news, extremism, polarisation, hate speech, trolling, mental health, privacy and bullying”.

Last year, I was in Africa when a young girl was auctioned as a bride on Facebook. I have sat with the parents of a child bullied to death online. I have been with a young girl at the devastating moment in which she realised that she had been taping sexual acts for a group, not just for the man with whom she thought she was in a relationship. I have been witness to scores of children who have ruined their family life, educational opportunities, reputation and self-esteem through overuse, misuse, misunderstandings and straightforward commercial abuse. An individual child does not, and should not be expected to, have the maturity to meet the social, sexual, political and commercial currency of the adult world.

In December, the Nurture Network, a multidisciplinary group of academics, mental health workers and child development experts, agreed that the three existing agencies of socialisation—family, friends and school—have now been joined by a fourth: the digital environment, an environment of socialisation in which the status of children is not recognised. In an interconnected world, the erosion of the privileges, protections and rights of childhood in one environment results in an erosion of childhood itself.

That brings me to my concerns about how we conceive harms. I will briefly raise three issues. First, our public discourse focuses on a narrow set of extreme harms of a violent or sexual nature. Ignoring so-called “lesser harms” misunderstands that for a child, harms are often cumulative. It fails to deal with the fact that one child will react violently to an interaction that does not harm another, or that vulnerable groups of children might merit specific and particular protection. Crucially, it ignores the fact that for most children, it is the quotidian and the everyday that lowers their self-esteem, creates anxiety, and inflicts an opportunity cost in which education, relationships and physical and personal development are denuded, rendering children—or, should I say, “underage adults”?—exposed and unprotected. Children’s rights are deliberately conceived as non-hierarchical. We must take all harms seriously.

Secondly, it is not adequate to define children’s experience of the digital environment in terms of an absence of harm. As long ago as 1946, the World Health Organization declared that well-being was,

“not merely the absence of disease or infirmity”.

The NHS defines it as a feeling of “physical, emotional and psychological” well-being. We must set our sights not on the absence of harm but on a child’s right to well-being and human flourishing.

Thirdly, whether we are tackling the problems of live streaming, child sexual abuse, gaming addiction or thinking towards a new world order in which the fridge knows more about your child’s dietary tastes than you do and can exploit that fact, we must not wait until harm has been done but consider in advance the risks that children face. Technology changes fast, but the risks consistently fall into four categories: content risks, both unsuitable and illegal; contact risks, often, but not always, involving an adult; conduct risks, involving risky behaviour or social humiliation; and contract risks, such as exploitative contractual relationships, gambling, aggressive marketing, unfair terms and conditions, discriminatory profiling and so on. Most experts, including many in the enforcement community, consider that upstream prevention based on mitigating risk rather than waiting for the manifestation of harm is by far the most effective approach.

There is much we can do. The Minister knows that I am not short of suggestions, but I will finish with a modest list. The digital environment is now indivisible from other environments in which our legal and regulatory arrangements embody our values. Parity of protection has been called for by the NSPCC. It was the approach taken in the Law Commission’s Abusive and Offensive Online Communications: A Scoping Report, and was articulated by the noble Lord, Lord Stevenson, in establishing that the Health and Safety at Work Act 1974 applies equally to artificial intelligence. What plans do the Government have to bring clarity to how our laws apply to the digital environment? Specifically, will the Government bring forward a harmonisation Bill to create an obligation to interpret legislation in a manner that offers parity of protection and redress online and offline, in a similar manner to Section 3 of the Human Rights Act?

Designing out known risk, often referred to as safety by design, is standard across other sectors. We like our brakes to work, our food to be free of poisons and our contracts to be fair in law. The Secretary of State has said that he is minded to introduce a duty of care on the sector. That is very welcome—but to be effective, it must be accompanied by impact assessments, design standards, transparency reporting, robust oversight and a regulator with the full toolkit of persuasion and penalty. Can the Minister confirm that the Government are planning this full suite of provisions?

The age-appropriate design code introduced by this House demands that companies anticipate the presence of children and meet their development needs in the area of data protection. I hope that the Minister will confirm the Government’s determination to produce a robust code across all areas of design agreed during the passage of the Data Protection Act. The code’s safety by design approach could and should be an exemplar of the codes and standards that must eventually form part of an online safety Bill.

Finally, companies make many promises in their published guidelines that set age limits, content rules and standards of behaviour, but then they do not uphold them. It is ludicrous that 61% of 12 year-olds have a social media account in spite of a joining age of 13, that Facebook says that it cannot work to its own definition of hate speech or that Twitter can have half a million pornographic images posted on it daily and still be characterised as a news app. Subjecting routine failure to uphold published terms to regulatory penalty would prevent companies entering into commercial contracts with underage children, drive services to categorise themselves accurately and ensure that companies say what they do, do what they said and are held to account if they fail to do it. I would be grateful if the Minister could confirm that this measure will be included in the upcoming White Paper.

Technology is often said to be neutral, and when we criticise the sector we are told that we are endangering its promise to cure cancer, educate the world and have us experience space travel without leaving our home, or threatening the future prosperity of the nation. Technology is indeed neutral, but we must ask to what end it is being deployed. It could in the future fulfil the hope of its founders and offer the beneficial outcomes for society that we all long for—but not if the price is the privileges, protections and inalienable rights of childhood. A child is a child until they reach maturity, not until the moment they reach for their smartphone.