Don’t trust algorithms to predict child-abuse risk | Letters
You are right to highlight councils' use of data about adults and children without their permission, alongside the warped stereotypes that inevitably shape the way families are categorised (Council algorithms use family data to predict child-abuse risk, 17 September). But the problems are more wide-ranging. In policy debates shaped by the Climbié and Baby P scandals, pre-emptive interventions sound attractive, but ethical debates about what level of intervention in family life, on what basis, and how pre-emptively, still need to take place. Such debates would be necessary even with accurate predictions, but they become absolutely crucial when, as with any risk screening programme, false positives are unavoidable. In a population where the base rate of abuse is low, these errors will far outnumber correct identifications, to a degree rarely appreciated.
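To see why, consider a rough sketch of the arithmetic. All the figures below (prevalence, sensitivity, specificity, population size) are hypothetical values chosen purely for illustration, not estimates drawn from any council's system.

```python
# Illustrative positive-predictive-value arithmetic for a risk screening tool.
# Every figure here is an assumption chosen for illustration, not real data.

base_rate = 0.01      # assumed prevalence: 1 in 100 families genuinely at risk
sensitivity = 0.90    # assumed: the tool flags 90% of genuinely at-risk families
specificity = 0.90    # assumed: the tool clears 90% of families not at risk
population = 100_000  # assumed size of the screened population

at_risk = population * base_rate
not_at_risk = population - at_risk

true_positives = at_risk * sensitivity
false_positives = not_at_risk * (1 - specificity)

# Probability that a flagged family is genuinely at risk (positive predictive value)
ppv = true_positives / (true_positives + false_positives)

print(f"True positives:  {true_positives:,.0f}")   # 900
print(f"False positives: {false_positives:,.0f}")  # 9,900
print(f"Chance a flagged family is genuinely at risk: {ppv:.1%}")  # ~8.3%
```

Even with a tool that is 90% accurate on both measures, roughly nine out of ten flagged families in this hypothetical scenario would be false positives.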
The buzz around big data and artificial intelligence may be leading councils to overlook not only the maths of risk screening but also the quality of their data. Our own research into child protection notes a weak evidence base for interventions, with social workers falling back on crude assumptions. Stereotypes discriminate against some families and lead to risk being overlooked in others, yet they may become entrenched and legitimised when incorporated into technology. Research is needed into whether these technologies enhance decision-making or whether they come to be relied on uncritically by pressured professionals with burgeoning caseloads. Enticed by software-driven solutions, our overstretched and decentralised child-protection system may lack the capacity for robust ethical and evidence-based reflection on these technologies.
Dr Patrick Brown
Associate professor, Amsterdam Institute of Social Science Research, University of Amsterdam; editor, Health, Risk and Society