Travis Noakes presented a thought-provoking seminar at a recent SAME meeting in Cape Town. The presentation covered the background to online trolling, concluding with possible strategies for ending abusive conversations on internet platforms. Here is an article summarising the presentation, written by Noakes:
As a teacher of visual design, I believe there is an opportunity for visual creatives to contribute to the graphic options that online users might draw on when confronting trolls. At present, perceived victims of trolls have few context-specific, anti-trolling images to draw on. For example, there is no emoji labelled ‘troll’, nor is there a graphic library that addresses the ‘100 troll types’ (Nuccitelli, 2018). I argue that exploring the use of ‘end-of-conversation’ graphics might be a helpful new strategy for potential trolling victims to consider.
There is a group of people who seem to use technology more than others to facilitate often nefarious motives. These so-called “trolls” are often described as possessing antisocial qualities, and in today’s internet-connected world they enjoy greater opportunities than ever before to connect with similar others and pursue their particular brand of “self-expression” (Buckels, Trapnell & Paulhus, 2014). Online trolls find ‘trollable’ internet platforms fertile ground for attacking their victims. They exploit ‘hot button’ issues for attention and derive pleasure from attacking their victims with different trolling strategies intended to elicit emotional responses and make them appear foolish.
All trolls take advantage of the grey area in which potential victims cannot be sure of the trolls’ intent (Hardaker, 2015). In contrast to the overt abuse that makes obvious trolls easy to deal with (Hardaker, 2013), cyber-bullying by covert trolls is insidious, as it is neither obviously hateful nor openly offensive. Covert trolls use their cognitive intelligence to mask their true intentions from victims and audiences (Zetlin, 2017). Since it is hard to spot these covert trolls’ true intentions from the evidence they share, they present just enough evidence to appear credible, so that blocking them would seem an infringement of their free speech. They even use this defence, accusing those who block them of “cowardice”, “censorship”, and “losing the argument” (Hardaker, 2013). In response, the average individual has to choose between doing the morally upstanding thing (i.e. upholding free speech) and protecting their own peace of mind by not engaging.
‘Do Not Feed The Trolls’ (DNFTT) is a time-tested strategy advising that it is fruitless to reward trolls with the attention and engagement they crave. While such advice seems highly appropriate for obvious trolls, ignoring covert ones on contemporary internet services, notably Twitter, can be problematic: trolls seem to take the opportunity to (re)frame the disagreement after victims have blocked and/or muted them. Opting out of interaction gives an aggressor the ‘last word’. The trolls may consequently misrepresent themselves as ‘victims’ for having been muted and blocked ‘undeservedly’, for example.
On the one hand, placing the focus on a victim’s response, rather than that of their perpetrators, may play into existing, problematic social dynamics around voice, ranging from “misogyny” (Mantilla, 2013) to “academic mobbing” (Gorelewski, Gorelewski and Porfilio, 2014). On the other hand, the victim’s silence may not help the troll to reappraise his behaviour and confront his dubious motives for frequent online aggression. It is clear that alternative strategies are needed.
Alternative strategies to DNFTT include exposing trolls; challenging, critiquing and mocking them; and even reciprocating with trolling (Hardaker, 2015). While such strategies have proved successful on Usenet discussion forums, they seem to present an invitation for further unwanted interaction. In contrast to these strategies, I intend to research strategies on contemporary online services that might serve as ‘end-notes’ to interaction. For example, a ‘Stop, troll!’ emoji sticker and a link to an online document that frames the covert troll’s background and history of aggression might be helpful to ‘perceived’ victims and to the audiences of their interactions with ‘alleged’ trolls.
How might anti-trolling graphics be developed for perceived victims? And how might victims of trolling use such graphics to end abusive interactions and engage different audiences?
There are several rationales for exploring the development and use of anti-trolling graphics. In the first place, the compression of pictorial communication affords victims far more economical communication than alphabetic writing does (Danesi, 2017). It may be easy to combine emoji stickers to connote a detailed end-of-conversation message (e.g. ‘I have muted and blocked this troll, with whom I choose not to interact’). I argue that emoji stickers may leverage positive associations that suggest control of emotions. Emoji, viewed as a form of informal communication, may make the potential victim’s difficult message more digestible, whilst also not signalling a desire to engage further.
I look forward to being involved in the teaching and development of anti-trolling graphics, and to reporting back on whether they prove helpful as a shortcut for potential victims to frame the end of interaction with their alleged online trolls.