What happened in Christchurch was horrific, and an example of how divisiveness opens the door to violence. There is no excuse for ethnic-based hate and murder. As a tech practitioner, I see how the alleged perpetrator weaponised the internet and social media, so thoroughly that even the social networks can't scrub his content away, and it reveals just how much vigilance and know-how is needed to manage our digital existence.
While mainstream news networks and parents all over the internet panicked over a “Momo challenge” that did not exist and wrung their hands over “predators in Roblox”, surveillance networks designed to sift through HUMINT and SIGINT failed to recognise posts on well-known public internet sewers that distinctly spoke of planned violence. That’s by design. The scope of the query simply excluded some parameters in favour of others, as is often the case with systems built by white westerners.
It is evident that the algorithms, AI, and Content ID can’t catch these things fast enough to stop them from being remixed and spread further, and none of the tech giants want to hire thousands more people to moderate every single livestream, video upload, and forum thread (if that’s even possible). How do you weed the signal out from the noise? This is the challenge we face as professionals, as parents, as people.
25 years ago, when we first welcomed the shrill beeping box into our homes, little did we know what we were in for. I remember how much of an adrenaline rush it was. We were excited to be able to find anything, talk to anyone and go anywhere, and in our optimism completely forgot that the darkest parts of humanity would find a space to make their homes too. Well, maybe we didn’t forget, but rather we let that same optimism blind us to the not-so-rosy possibilities.
The next 5 years will be critical for us. Our children are born into and growing up in a world of near-constant connectivity and access. They’ll navigate media creation, consumption and social networking in ways we will not be able to comprehend, and faster than the tools to moderate them can be created. How do we prepare them to tell what’s real from a deepfake? Would we even know the difference ourselves? Parsing the future will require a level of mental bandwidth and acuity far beyond what my generation ever needed growing up – but that’s if you want to try to stay on top of it. The alternative is to let go, and see where that takes us. I’m afraid this latter option will lead us nowhere good.