TechScape: The new law that could protect UK children online – as long as it works
Thanks to a new act that could reshape the internet, TikTok, Instagram and other platforms will need to 'tame' harmful content and algorithms
The Online Safety Act in the UK is, quietly, one of the most important pieces of legislation to have come out of this government. Admittedly, the competition is slim. But as time goes by, and more and more of the act begins to take effect, we're starting to see how it will reshape the internet.
Social media firms have been told to tame "aggressive algorithms" that recommend harmful content to children, as part of Ofcom's new safety codes of practice.
The children's safety codes, introduced under the Online Safety Act, let Ofcom set tight new rules on how internet companies may interact with children. They call on services to make their platforms child-safe by default, or to implement robust age checks that identify children and give them safer versions of the experience.
The Goldilocks theory of policy is simple enough. If Mummy Bear says your latest government bill is too hot, and Daddy Bear says your latest government bill is too cold, then you can tuck in knowing that the actual temperature is just right.
Unfortunately, the Goldilocks theory sometimes fails. You learn that what you actually have in front of you is less a perfectly heated bowl of porridge and more a roast chicken you popped in the oven still frozen: frosty on the inside, burnt on the outside, and harmful to your health if you try to eat it.
The code is weak on design features, however. While the research shows livestreaming and direct messaging are high risk, there are few mandatory mitigations included to tackle them. Similarly, the requirement for measures to have an existing evidence base fails to incentivise new approaches to safety ... How can you provide evidence that something does not work if you don't try it?
As we celebrate the arrival of the draft code, we should already be demanding that the holes in it are fixed, the exceptions readdressed, the lobbyists contained.
Chain-of-thought responses from language models improve performance across most benchmarks. However, it remains unclear to what extent these performance gains can be attributed to human-like task decomposition or simply the greater computation that additional tokens allow. We show that transformers can use meaningless filler tokens (e.g., '......') in place of a chain of thought to solve two hard algorithmic tasks they could not solve when responding without intermediate tokens.
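To make the filler-token setup concrete, here is a minimal, illustrative sketch in Python of the two prompt styles the abstract contrasts. The question and prompt wording are invented for illustration; they are not the paper's actual tasks or code.

```python
# Illustrative sketch of the filler-token idea from the quoted abstract.
# The prompts below are invented examples, not the paper's benchmarks.

def chain_of_thought_prompt(question: str) -> str:
    # Standard setup: the model is invited to emit intermediate
    # reasoning tokens before its answer.
    return f"{question}\nLet's think step by step:"

def filler_token_prompt(question: str, n_fillers: int = 30) -> str:
    # The filler-token setup: the same question, but the space where
    # reasoning would go is padded with meaningless '.' tokens.
    return f"{question}\n" + ". " * n_fillers + "\nAnswer:"

question = "Is 2 + 2 equal to 4?"
print(chain_of_thought_prompt(question))
print(filler_token_prompt(question))
```

The paper's finding is that, on certain hard algorithmic tasks, the extra computation those meaningless tokens buy can matter more than the human-readable reasoning itself.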