‘I can’t stay on script for the life of me,’ said Søraker at the beginning of his Studium Generale talk, trying to justify in advance its lack of structure. But despite his usual tangential rants and random jokes, the former UT lecturer delivered an eloquent call for tech companies and academia to work together to tackle the challenges of our information society.
His job at Google
Within the tech giant’s Trust & Safety Department, Søraker’s responsibility is to ensure that policy principles and enforcement guidelines are clear enough to protect users from hateful or harmful content.
Because political circumstances keep changing, policies need to be continuously updated. Søraker mentioned the example of the mass shooting at a high school in Parkland, Florida, and how conspiracy theorists online were calling the massacre a hoax and accusing the student victims of being ‘crisis actors’. ‘Being called an actor isn’t necessarily bad, but in that context, calling someone whose life was in danger and whose best friends got killed an actor is about as bad as calling someone a racial slur.’
Hence Søraker’s question: ‘How can you draw lines when you have these continuous transitions?’ This task, he believes, is completely aligned with the job of many philosophers: ‘Part of what philosophers do is to ask: these categories, these distinctions we have out there, are they justifiable?’ But Søraker argues that drawing those demarcations to clarify policies isn’t always easy, as with the ever-contested line between justifiable political dissent and hate speech.
Translating policies into algorithms
To further complicate things, policies must not only be increasingly detailed, they also need to be enforceable with the algorithms that engineers can actually build. As an example of how challenging that is, Søraker cited the scandal surrounding Facebook’s banning of the famous ‘Napalm girl’ picture, which had been flagged as inappropriate. ‘Human enforcement said it was a depiction of a naked underage girl’, a content description that under Facebook’s policies meant immediate removal.
Due to the picture’s political significance, the decision was later revoked, but the AI algorithm had already learned that behavior, and the example shows how hard it is for machine learning to grasp cultural and historical context. ‘Try telling engineers “write an algorithm that gets rid of all the pictures of naked underage girls except when they have historical value”.’
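To make the difficulty concrete, here is a minimal, purely hypothetical sketch in Python. It is not anything Søraker presented and not any real platform’s system; the predicates `violates_nudity_policy` and `has_historical_value` are made up for illustration. The first is the kind of visual classification machine learning handles reasonably well; the second depends on context that never appears in the pixels, which is exactly where the engineers’ task breaks down.

```python
from dataclasses import dataclass

@dataclass
class Image:
    pixels: bytes                        # all a vision model actually sees
    documented_event: str | None = None  # context a human moderator brings

def violates_nudity_policy(img: Image) -> bool:
    """Stand-in for a trained vision classifier. Its target is a visual
    property, so a model can in principle learn it from pixels alone."""
    return b"nude" in img.pixels         # toy placeholder for model inference

def has_historical_value(img: Image) -> bool:
    """The hard part: 'historical value' is not a property of the pixels.
    This toy version cheats by reading metadata that a real pixel-level
    classifier would never receive."""
    return img.documented_event is not None

def moderate(img: Image) -> str:
    """The policy plus its exception is trivial to state in code, but only
    because the two predicates above are assumed to exist."""
    if violates_nudity_policy(img) and not has_historical_value(img):
        return "remove"
    return "keep"

# At the pixel level, the historic photograph and a genuine violation can
# look identical; only off-image context separates them.
napalm_girl = Image(pixels=b"nude", documented_event="Vietnam War, 1972")
violation = Image(pixels=b"nude")
print(moderate(napalm_girl))  # -> keep
print(moderate(violation))    # -> remove
```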
Can we reach a consensus?
Søraker acknowledged that making these decisions often means making compromises and political choices, yet tech companies are making them all the time. ‘The tech industry should not make these decisions by themselves. Google, Facebook, Twitter, they should not be the arbiters of epistemological or moral truths.’
Just like in the old days, Søraker delivered an inspiring and funny lecture that, even though it might not have been scripted, was a refreshing reminder to everyone in the Amphitheater that the knowledge produced in academia can be of great relevance for deciding how to develop the technologies that shape our societies.