Chat Control or Child Protection?
October 2022
Abstract
Ian Levy and Crispin Robinson's position paper "Thoughts on child safety on commodity platforms" is to be welcomed for extending the scope of the debate about the extent to which child safety concerns justify legal limits to online privacy. Their paper's context is the laws proposed in both the UK and the EU to give the authorities the power to undermine end-to-end cryptography in online communications services, with the justification of preventing and detecting child abuse and terrorist recruitment. Both jurisdictions plan to make it easier to get service firms to take down a range of illegal material from their servers; but they also propose to mandate client-side scanning - not just for known illegal images, but also for text messages indicative of sexual grooming or terrorist recruitment. In this initial response, I raise technical issues about the capabilities of the technologies the authorities propose to mandate, and a deeper strategic issue: that we should view the child safety debate from the perspective of children at risk of violence, rather than from that of the security and intelligence agencies and the firms that sell surveillance software. The debate on terrorism similarly needs to be grounded in the context in which young people are radicalised. Both political violence and violence against children tend to be politicised, and as a result are often poorly policed. Effective policing, particularly of crimes embedded in wicked social problems, must be locally led and involve multiple stakeholders; the idea of using 'artificial intelligence' to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy. The debate must also be conducted within the boundary conditions set by human rights and privacy law, and to be pragmatic must also consider reasonable police priorities.