He could raise the tone of discourse with his observations, including that in the interests of accuracy, SpongeBob SquarePants should be called SpongeBob RectangularPants and that it is not sensible for cowboys to have that name when they don’t ride cows.
He might even be able to provide some guidance on what is obnoxious, now that his father has had to laboriously explain the term after applying it to aspects of the boy's behaviour.
I am among those who have never felt any enthusiasm for the twittersphere, although some journalists seem to think they would not survive without it. Really? Is that because too many reporters, faced with the pressure to churn out too much copy without doing too much thinking or having to leave the office or interact with anyone, have relied for too long on the cobbling together of outrageous Twitter outbursts and calling it news?
I know I am old-fashioned when it comes to social media. If you were desperate, you could find me on LinkedIn, but its only use to me in all the years I have had a limited profile there was to contact two former colleagues.
Facebook, Twitter, TikTok, Instagram and anything else are all the better for the absence of any of my incendiary instant pronouncements, my dancing (the family agree my moves from a school stage production of Hiawatha are likely to be culturally controversial and that I am risking an ACC claim with my other party trick, the Charleston), and any of my fumbling forays into photography.
Twitter has only been around since 2006, so most of its users should be old enough to remember what life was like before then (nowadays you must be at least 13 to sign up, although judging from some of the infantile stuff reported, the mental age of some users may be questionable). Whether life was better or worse pre-Twitter is a question worth asking.
Attempts to rein in the excesses of social media companies have not been wildly successful. Any reduction in terrorist and extremist online content has been insufficient despite all the lofty talk following the Christchurch Call, the international initiative to curb such content launched by our Prime Minister Jacinda Ardern and French President Emmanuel Macron after livestreaming of the Christchurch mosques killings in 2019.
Last month, the Department of Internal Affairs released its first annual report into its investigations of online violent extremist content, covering 2021.
White supremacist content was the most prevalent and, depressingly, the livestream of the Christchurch attacks was still being shared and promoted. Also, much of the objectionable content classed as promoting a conspiracy theory ideology was related to the Christchurch terrorist attacks.
Twitter was the platform most often investigated, although it did act on all content the DIA asked it to remove, unlike BitChute, a United Kingdom-founded video-hosting service known for its far-right content, which was the second-most often investigated by the DIA.
Around the same time the DIA report was released, The Centre for Countering Digital Hate, a US-headquartered international non-profit organisation, released its report on anti-Muslim posts identified in February and March this year on Facebook, Instagram, TikTok, Twitter and YouTube.
It found 530 posts containing anti-Muslim material, viewed at least 25.5 million times. These were reported to the relevant companies, but 89% of them resulted in no action. Twitter took no action on 97% of the posts reported there, and none was taken on any of the YouTube material.
Researchers found 20 posts featuring the Christchurch terrorist, of which just six were acted upon. Facebook, Instagram and Twitter, which have all committed to promptly removing terrorist and extremist content as part of the Christchurch Call, failed to remove any of the content identified, the report said.
And in the midst of this, there has been considerable disquiet about the way Facebook moderators are treated. Anyone undertaking this distressing work deserves proper payment, limits to their viewing time and paid access to readily available healthcare to keep harm to a minimum. Why should tech giants be allowed to avoid health and safety requirements which would be expected in any other setting?
There will be high hopes for the European Union's soon-to-be-finalised Digital Services Act, which will attempt to regulate online content, applying hefty fines to transgressors, rather than relying on the tech companies' vague (and unfulfilled) promises that they will do better. After all this time, it is still hard not to be sceptical.
-- Elspeth McLean is a Dunedin writer.