Response to the DCMS Online Harms White Paper

We have submitted a consultation response, which is published below, to the Government's recent Online Harms White Paper. The response answers several of the consultation questions, but also aims to provide some insight into the work we do and the problems we face in helping schools to safeguard the children under their care.

Opendium provides sector-leading internet filtering and monitoring solutions that enable schools and colleges to implement a truly effective safeguarding strategy. For more information or to book a demonstration, please visit opendium.com/demo.

Dear Online Harms Team,

I have read the DCMS's Online Harms White Paper with interest. The paper obviously has a very broad remit: protecting everyone online. This is certainly important, and it clearly places a great deal of responsibility on the service providers themselves. However, the role of carers, such as schools and parents, is also extremely important and is often overlooked.

As a company, Opendium has been helping schools with their online safety for 14 years, and the company's directors have each worked in this field, in various roles, for between 19 and 24 years. I therefore hope to provide some insight from a perspective that may not have been as closely considered.

Opendium is a British business, providing internet filtering and monitoring solutions that enable schools and colleges to implement an effective safeguarding strategy. We have always worked closely with schools to design systems that meet their specific needs, rather than trying to shoehorn a business solution into an educational environment.

In the first part I'll respond to some specific questions that were asked in the white paper. In the second part I'd like to provide some insight into the work we do and the problems we face. We would like to engage with all relevant government departments on these issues, which are relevant not only to DCMS, but also to the Department for Education and the Home Office.

Responses to Questions

For many of these questions we have little comment to make, but three questions stood out as areas in which we have experience.

Question 11: A new or existing regulator is intended to be cost neutral: on what basis should any funding contributions from industry be determined?

It is important to ensure that start-up businesses do not face barriers to entry. When we created Opendium, we began as a very lean business, choosing not to start off with the business heavily in debt. In order to operate in our field, a company has to be a member of certain organisations, and although it may not seem like much, a £1000 annual membership fee here or there can be very significant to a lean start-up business.

Although we always aim to stay lean and agile, these days we can comfortably afford these fees, and we do not expect to be within the new regulator's scope, so this will likely not affect us. However, I do believe that the regulator must be mindful not to create barriers to entry for new businesses, and that businesses under a certain size should therefore probably not be expected to fund the regulator.

Question 12: Should the regulator be empowered to i) disrupt business activities, or ii) undertake ISP blocking, or iii) implement a regime for senior management liability? What, if any, further powers should be available to the regulator?

I am concerned that ISP blocking could realistically only be applied to smaller businesses. The very large businesses, which have a disproportionate impact on the public's safety, would inevitably be excluded from this action. Were the regulator to feel it necessary, it is practical for a lesser-used service to be blocked at the ISP level. However, it seems to me that the public outcry associated with blocking the public's access to a huge service such as Facebook, Google or Twitter would preclude taking such action against them.

Question 17: Should the government be doing more to help people manage their own and their children's online safety and, if so, what?

Filtering systems categorise web sites using a variety of mechanisms, but there are limits to their accuracy. This is especially true of web sites that consist predominantly of pictures and video and carry a mixture of both acceptable and harmful material.

Examples of these are Twitter and Tumblr: much of their content is perfectly acceptable, and often desirable for children to access, but at the same time they also contain a lot of pornography. It is worth noting that, despite Tumblr's recent "pornography ban", there is still a huge amount of pornography available on the platform.

Websites such as Twitter often already know which content is pornographic and allow users to filter it out, but these filters are platform-specific and under the control of the users themselves, rather than being centralised filters that parents or schools can apply across the entire internet.

The upcoming age verification requirement for pornography websites may go some way towards helping. However, it would help parents and schools to protect children if web sites were required to insert standardised metadata into content that they know is pornographic, so that third-party filters could more accurately block access to it, irrespective of the user's account settings. There are a number of existing metadata standards that could be used, such as:

  • Restricted to Adults (RTA) Label

  • Voluntary Content Ratings (VCR)

  • Internet Content Rating Association (ICRA) content descriptions

  • Protocol for Web Description Resources (POWDER)

Some websites already do include such metadata, but in a non-standard form. Standardising this would be relatively straightforward for them, were it to be mandated, but without such legislation there is little incentive for them to do so.
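
As an illustration, the Restricted to Adults label is simply a fixed string that a site can serve as an HTTP response header or embed in the page itself. The sketch below shows how a third-party filter could check for it; it assumes the Python "requests" library and uses a purely illustrative URL:

    # Sketch: detect the Restricted to Adults (RTA) label on a page.
    # Assumes the "requests" library; the URL used below is illustrative only.
    import requests

    RTA_LABEL = "RTA-5042-1996-1400-1577-RTA"

    def page_is_rta_labelled(url):
        response = requests.get(url, timeout=10)
        # The label may be sent as an HTTP "Rating" response header...
        if RTA_LABEL in response.headers.get("Rating", ""):
            return True
        # ...or embedded in the page itself, typically as a <meta name="rating"> tag.
        return RTA_LABEL in response.text

    if page_is_rta_labelled("https://example.com/"):
        print("Page carries the RTA label - block for child users")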

Microsoft also advocates that web sites should filter out adult content if the client sends a "Prefer: safe" HTTP header, but this is not widely supported.
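
The client-side signal is equally simple. A minimal sketch, again assuming the "requests" library and an illustrative URL:

    # Sketch: ask a site to filter out adult content by sending the
    # "Prefer: safe" request header. Honouring it is entirely at the
    # site's discretion, and support is currently rare.
    import requests

    response = requests.get("https://example.com/", headers={"Prefer": "safe"})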

I would also like to make a note in relation to the duty of care that service providers have to children. Service providers should not be able to simply hide behind the age restrictions specified in their terms and conditions. If the terms state that users must be over 18, but no robust age verification is used, service providers cannot just assume that no children are using their services and that they therefore owe no duty of care to those users who are children.

Online Safety in Schools

When people think about the technologies used for online safety in schools, the first thing that comes to mind is usually just blocking children's access to pornography. Of course this remains an important aspect, but modern online safety technology has a much broader remit than just this.

A school online safety system must:

  • block access to grossly inappropriate content;

  • not overblock access to legitimate content;

  • allow the school to control access to content that would be disruptive during lessons; and

  • provide understandable safeguarding reports.

This last point is very important for a couple of reasons:

Firstly, it is not possible to guarantee that all harmful content will be blocked whilst also allowing access to all legitimate content. The DfE's Keeping Children Safe in Education guidance quite correctly says that overblocking is extremely detrimental to children's education. Schools must therefore take a somewhat relaxed attitude to filtering in order to reduce overblocking, and use their system's reporting capabilities to alert them to abuses, which can then be handled both through disciplinary measures and by tailoring the school's online safety curriculum.

Secondly, the internet is such an integral part of children's lives these days that safeguarding reports from the school's online safety system are an important way for schools to spot both online and offline concerns which may need intervention. These reports can provide warnings of children who are being bullied, who are at risk of self-harm, or who may be the subject of abuse.

Unfortunately, as an online safety business, we frequently find that the large technology businesses are actively working against schools' efforts to provide a safe space for the children under their care.

For everyone's security, it is important for internet traffic to be encrypted. Operating systems provide a framework to encrypt traffic, whilst allowing designated systems to decrypt and inspect the traffic. This allows the school's systems to be authorised to inspect, block and report on children's internet use. Importantly, this does not undermine security, as the traffic is still protected from unauthorised decryption. In fact, authorised decryption can improve security by allowing viruses and malware to be blocked as they are brought onto the school network.
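
To make the mechanism concrete: the school's own certificate authority is installed into the trust store of each managed device, and web traffic is routed through the filtering gateway, which re-encrypts it using certificates issued by that authority. The sketch below shows what an inspected request looks like from the client side, assuming the Python "requests" library and purely illustrative host names and file paths:

    # Sketch: a web request from a school-managed device, routed through the
    # school's filtering gateway. The gateway presents certificates issued by
    # the school's own CA, which managed devices have been configured to trust.
    # The proxy address and CA file path are illustrative only.
    import requests

    response = requests.get(
        "https://example.com/",
        proxies={"https": "http://filter.school.example:3128"},
        verify="/etc/ssl/certs/school-ca.pem",  # the school's CA certificate
    )
    # A device that has not been configured to trust the school CA would see a
    # certificate error here, rather than having its traffic silently decrypted.

In this sketch the trust in the school's CA is made explicit via the verify parameter for clarity; in practice it would normally come from the managed device's operating system trust store.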

However, most of the big social networking businesses have designed their mobile apps to bypass this standard framework and prevent such traffic inspection. This leaves schools with a choice between blocking access to the whole social network, or allowing free-for-all access with no ability to alert staff to concerning behaviour such as bullying, or to block access to groups which promote suicide.

Schools are expected to block access to terror-related content listed by the Counter-Terrorism Internet Referral Unit, and to child abuse image content identified by the Internet Watch Foundation, but apps are frequently designed in a way which prevents this kind of filtering.
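
In web traffic, this kind of filtering amounts to checking each requested URL against the relevant list before the request is allowed to proceed, as sketched below. The real CTIRU and IWF lists are distributed to filtering providers under agreement, so the entries shown are placeholders only:

    # Sketch: URL-list filtering of the kind schools are expected to apply.
    # The entries below are placeholders; the real CTIRU and IWF lists are
    # supplied to filtering providers under agreement.
    BLOCKED_URLS = {
        "http://blocked.example/page1",
        "http://blocked.example/page2",
    }

    def is_blocked(url):
        return url in BLOCKED_URLS

    # The gateway can only make this decision if it can see the full URL being
    # requested - which is precisely what apps that bypass authorised
    # decryption prevent it from doing.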

In 2016, Google changed Android's security framework to be hostile to the standardised authorised decryption mechanisms which schools rely on, and has refused to engage with schools and online safety businesses over the issue. As a result, content accessed by most Android apps cannot be filtered by school systems beyond simply allowing or disallowing the whole app. This was cited as a significant concern by a number of online safety suppliers at last autumn's conference organised by the Home Office and the CTIRU.

Protecting the public from unauthorised access to their data is of course extremely important. Members of the public need to feel that their information is safe from being snooped on by criminals or governments. However, there must be a balance, and there is a difference between the protections that adults require and those that minors require. We certainly don't want to stand in the way of improved security for everyone, but up until now the large technology businesses have pursued new security technologies and imposed them on society without due regard for their negative impact on schools' ability to safeguard the children under their care.

If the businesses that dictate the direction of common technologies will not stop to discuss the serious issues that they are creating, there seems little alternative but to legislate. Upcoming technologies, such as TLS 1.3 and DNS-over-TLS, will only further weaken children's safety. We strongly feel that schools and online safety businesses should be involved in discussions about how security can be improved without unduly harming child safety, rather than having their concerns ignored and swept aside by huge businesses that feel they are untouchable.

Whilst it may be possible to legislate to ensure that businesses take account of schools' safeguarding duties, there will always be technologies that cannot be adequately filtered and reported upon. There is very little guidance available to schools about how to handle these technologies, and that was also a concern raised by a number of suppliers at last autumn's CTIRU conference. We are disappointed that this was not addressed by the DfE's recent education technology strategy publication.

Apps such as WhatsApp are invaluable for legitimate communication, especially for allowing children at boarding schools to keep in touch with their parents. Indeed, Ofsted have been known to criticise schools for overzealous blocking of such technologies. But end-to-end encryption also allows these apps to be used for more nefarious purposes, such as online grooming and bullying.

Blocking these apps relieves the school of its liability, but prevents their legitimate use and serves to push children away from the school's relatively safe network and onto the mobile networks, where the school's ability to safeguard is greatly reduced. On the other hand, allowing them keeps the children in the relative safety of the school network and retains the benefit of the apps, but schools understandably have significant concerns that they will be held responsible if any safeguarding problems arise as a result.

None of these questions have easy answers, but they certainly do need to be addressed. We have always worked very closely with schools and, likewise, have always looked to work with government wherever possible.

Yours faithfully,

Stephen Hill BSc (Hons.)

Technical Director, Opendium