New Clause 14 - Providers’ judgements about the status of content

Online Safety Bill – in the House of Commons at 4:30 pm on 12 July 2022.

“(1) This section sets out the approach to be taken where—

(a) a system or process operated or used by a provider of a Part 3 service for the purpose of compliance with relevant requirements, or

(b) a risk assessment required to be carried out by Part 3, involves a judgement by a provider about whether content is content of a particular kind.

(2) Such judgements are to be made on the basis of all relevant information that is reasonably available to a provider.

(3) In construing the reference to information that is reasonably available to a provider, the following factors, in particular, are relevant—

(a) the size and capacity of the provider, and

(b) whether a judgement is made by human moderators, by means of automated systems or processes or by means of automated systems or processes together with human moderators.

(4) Subsections (5) to (7) apply (as well as subsection (2)) in relation to judgements by providers about whether content is—

(a) illegal content, or illegal content of a particular kind, or

(b) a fraudulent advertisement.

(5) In making such judgements, the approach to be followed is whether a provider has reasonable grounds to infer that content is content of the kind in question (and a provider must treat content as content of the kind in question if reasonable grounds for that inference exist).

(6) Reasonable grounds for that inference exist in relation to content and an offence if, following the approach in subsection (2), a provider—

(a) has reasonable grounds to infer that all elements necessary for the commission of the offence, including mental elements, are present or satisfied, and

(b) does not have reasonable grounds to infer that a defence to the offence may be successfully relied upon.

(7) In the case of content generated by a bot or other automated tool, the tests mentioned in subsection (6)(a) and (b) are to be applied in relation to the conduct or mental state of a person who may be assumed to control the bot or tool (or, depending what a provider knows in a particular case, the actual person who controls the bot or tool).

(8) In considering a provider’s compliance with relevant requirements to which this section is relevant, OFCOM may take into account whether providers’ judgements follow the approaches set out in this section (including judgements made by means of automated systems or processes, alone or together with human moderators).

(9) In this section—

“fraudulent advertisement” has the meaning given by section 34 or 35 (depending on the kind of service in question);

“illegal content” has the same meaning as in Part 3 (see section 52);

“relevant requirements” means—

(a) duties and requirements under this Act, and

(b) requirements of a notice given by OFCOM under this Act.”—(Damian Collins.)

This new clause clarifies how providers are to approach judgements (human or automated) about whether content is content of a particular kind, and in particular, makes provision about how questions of mental state and defences are to be approached when considering whether content is illegal content or a fraudulent advertisement.
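Read together, subsections (5) to (7) amount to a two-limb conjunction: a provider must treat content as illegal content only where it has reasonable grounds to infer that all elements of the offence, including mental elements, are present (limb (a)) and does not have reasonable grounds to infer that a defence may successfully be relied upon (limb (b)). A minimal Python sketch of that logic follows; all names are hypothetical, and neither the clause nor the Bill prescribes any data model or implementation.

    from dataclasses import dataclass

    @dataclass
    class ReasonableGrounds:
        """Hypothetical record of a provider's inferences about one item of
        content and one offence, per subsection (6). For bot-generated
        content, subsection (7) applies the same limbs to the conduct or
        mental state of the person assumed (or known) to control the bot."""
        all_elements_present: bool  # limb (a): all elements, including mental elements
        defence_may_succeed: bool   # limb (b): a defence may successfully be relied on

    def must_treat_as_illegal(grounds: ReasonableGrounds) -> bool:
        # Subsection (5): the provider must treat content as content of the
        # kind in question if reasonable grounds for the inference exist;
        # subsection (6) says they exist when limb (a) holds and limb (b)
        # does not.
        return grounds.all_elements_present and not grounds.defence_may_succeed

    # Elements inferred and no apparent defence: treat as illegal content.
    assert must_treat_as_illegal(ReasonableGrounds(True, False))
    # A defence appears available: reasonable grounds do not exist.
    assert not must_treat_as_illegal(ReasonableGrounds(True, True))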

Brought up.

Question put, That the clause be added to the Bill.

Division number 37: Online Safety Bill Report Stage, New Clause 14

The House divided: Ayes 288, Noes 229.

Question accordingly agreed to.

New clause 14 read a Second time, and added to the Bill.