No, The UK’s Online Safety Act Doesn’t Make Children Safer Online

by
Paige Collings
from Techdirt on (#6Z5EN)

Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the latest misguided attempts to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country's Online Safety Act (OSA).

The legislation attempts to make the UK "the safest place" in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK, including social media, search engines, music sites, and adult content providers, enforce age checks to prevent children from seeing harmful content. This is defined in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:

  1. Primary priority content that is harmful to children:
    1. Pornographic content.
    2. Content which encourages, promotes or provides instructions for:
      1. suicide;
      2. self-harm; or
      3. an eating disorder or behaviours associated with an eating disorder.
  2. Priority content that is harmful to children:
    1. Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
    2. Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;
    3. Content that encourages, promotes or provides instructions for serious violence against a person;
    4. Bullying content;
    5. Content which depicts serious violence against, or graphically depicts serious injury to, a person or animal (whether real or fictional);
    6. Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and
    7. Content that encourages the self-administration of harmful substances.
  3. Non-designated content that is harmful to children (NDC):
    1. Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:
      1. the content's potential financial impact;
      2. the safety or quality of goods featured in the content; or
      3. the way in which a service featured in the content may be performed.

Online service providers must make a judgement about whether the content they host is harmful to children and, if so, address the risk by implementing a number of measures, which include, but are not limited to:

  1. Robust age checks: "Services must use highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm."

    To do this, all users on sites that host this content must verify their age, for example by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving the age-check service permission to access information from their bank about whether they are over 18.

  2. Safer algorithms: "Services will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content."
  3. Effective moderation: "All services must have content moderation systems in place to take swift action against content harmful to children when they become aware of it."

Since these measures took effect in late July, social media platforms Reddit, Bluesky, Discord, and X all introduced age checks to block children from seeing harmful content on their sites. Porn websites like Pornhub and YouPorn implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address for technology to analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also requiring users to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the right-wing platform Gab.

The UK Must Do Better

The UK is not alone in pursuing such a misguided approach to protecting children online: the U.S. Supreme Court recently paved the way for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week ruled that porn websites can check users' ages; the European Commission is pushing forward with plans to test its age-verification app; and Australia's ban on youth under the age of 16 accessing social media is likely to be implemented in December.

But the UK's scramble to find an effective age verification method shows us that there isn't one, and it's high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet.

And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures.

The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users, including children, rather than pushing the enforcement of legislation that harms the very people it was meant to protect.

Originally posted to the EFF's Deeplinks blog.
