Troll farming

When did the Internet lose its innocence? In 1993 Howard Rheingold, pioneering advocate for life online, wrote in The Virtual Community:

“People in virtual communities use words on screens to exchange pleasantries and argue, engage in intellectual discourse, conduct commerce, exchange knowledge, share emotional support, make plans, brainstorm, gossip, feud, fall in love, find friends and lose them, play games, flirt, create a little high art and a lot of idle talk” (3).

Rheingold’s main point here was that online culture is highly varied, “an ecosystem of subcultures, some frivolous, others serious” (3), and that people conduct social commerce online much as they do in everyday life, but without depending on bodily presence: “we leave our bodies behind” (3).

Virtual communities sounded benign and enabling. The threats came from outside: corporatisation, censorship and the corruption of the grass-roots character of online life.

Later in the same book, Rheingold touched on the issue of abuse, but only to describe how online communities can help victims overcome the trauma of domestic abuse. Online sociability wins through. Abuse conducted online gets scant mention.

Misadventure at the margins

Early commentators recognised online threats and malevolence, but the examples resided at the margins, and were even paraded as intellectual enablers that prompted discussion about ethics both on- and offline. Misadventure at the margins presented as a fascinating oddity. After all, the Internet was created, promoted and governed by articulate, liberal, progressive ethical communities.

Think of the infamous reflections on virtual rape prompted by Julian Dibbell’s 1993 article “A rape in cyberspace.” The circumstances of the so-called “rape” were a peculiar, singular case of online abuse amidst a constellation of polite norms. Discussion of the case disrupted online life and provoked reflection about ethics in role play and in real life, at least amongst academics.

The idea that social media provides a window into society offers a similarly benign construction of the value of online communities. See my post: The Internet as research tool. You can scrape social media feeds, such as Twitter and Instagram, to find out what people are talking about, where they are when they do it, and what they think is important. With caveats about selectivity and biased sampling, such feeds provide some insight into the mood of a community.
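
To make the research use concrete, here is a minimal Python sketch of the kind of aggregation such scraping supports. It assumes the posts have already been collected; the sample data and field names are hypothetical, and real feeds would come from a platform API or scraper, subject to its terms of service.

```python
from collections import Counter

# Hypothetical sample of already-collected posts. Real data would come from
# a platform API or scraper; the field names here are invented for illustration.
posts = [
    {"text": "Great turnout at the #referendum march", "place": "Edinburgh"},
    {"text": "Queues everywhere #referendum #travel", "place": "Glasgow"},
    {"text": "Lovely weather for the #festival opening", "place": "Edinburgh"},
]

def hashtags(text):
    """Extract lower-cased #tags from a post's text."""
    return [w.lstrip("#").lower() for w in text.split() if w.startswith("#")]

# What people are talking about, and where they are when they do it.
topic_counts = Counter(tag for p in posts for tag in hashtags(p["text"]))
place_counts = Counter(p["place"] for p in posts)

print(topic_counts.most_common(3))  # e.g. [('referendum', 2), ...]
print(place_counts.most_common(3))  # e.g. [('Edinburgh', 2), ...]
```

The same caveats apply here: counts like these reflect who happens to post, not the community as a whole.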

But online life has lost its innocence. Now that it is in the hands of practically everyone and anyone, online abuse and bullying seem to have increased, and there’s Internet addiction to contend with. As with other media, people with a cause use social media to influence and persuade others. Put in alarmist terms, social media is a vehicle for distributing propaganda.

Hidden persuaders

Now we know that there are organisations with rooms full of computer operatives monitoring news and social media feeds, both public and private, and intervening in the flow of information in an attempt to influence what gets passed on and talked about online.

This is new. Print and other mainstream media have been required to declare when they are delivering messages or running ads on behalf of a political party. It seems that pages on Facebook deliver political messaging without declaring their affiliations. Such messaging may seek simply to cause upset, discord and division. This is the charge levelled at malignant foreign agents whose propaganda efforts focus on sowing confusion.

Under this scenario, state-run troll farms deliver strategically disruptive messages, links and articles amidst a substantial quantity of truthful and well-informed messaging, commentary and opinion. If the majority of the messages seem truthful, then the untruths sneak through.
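
A toy simulation suggests why the tactic works when trust attaches to sources rather than to individual messages: a source that passes a spot check gets its whole feed passed on, untruths included. The proportions and threshold below are hypothetical.

```python
import random

random.seed(1)  # reproducible toy run

# One source sends 100 messages: 90 accurate, 10 planted untruths.
messages = ["true"] * 90 + ["false"] * 10
random.shuffle(messages)

# A reader spot-checks ten messages and trusts the whole feed
# if at least eight of them check out.
sample = random.sample(messages, 10)
trusted = sample.count("true") >= 8

# If trust is granted to the source, every untruth rides along.
passed_untruths = messages.count("false") if trusted else 0
print(f"Source trusted: {trusted}; untruths passed on: {passed_untruths}")
```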

Counter-espionage

Now there are moves to counter such online political espionage. August 2017 saw the launch of a new US site (Hamilton68) that purports to monitor malevolent Twitter feeds sourced from Russia. The site highlights the problem of automated disinformation:

“these disinformation networks also include bots and trolls that synchronize to promote Russian messaging themes, including attack campaigns and the spreading of disinformation. Some of these accounts are directly controlled by Russia, others are users who on their own initiative reliably repeat and amplify Russian themes.”

The site claims to track Twitter accounts that

“use automation to boost the signal of other accounts linked to Russian influence operations. … These accounts may be bots, meaning a piece of computer code that tweets automatically based on pre-determined rules, or they may be cyborgs, meaning that some of their activity is automated, which is then manually supplemented by a human user.”
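
Hamilton68 does not publish its detection method, but the quoted definitions hint at what rule-based screening for automation might look like. The sketch below is a toy heuristic, not the site’s actual approach; the thresholds are entirely hypothetical.

```python
from statistics import pstdev

def classify(post_times):
    """Classify an account from the timestamps (in seconds) of its posts.

    Near-constant intervals between posts suggest automation; partly
    regular intervals suggest a "cyborg" account in the quoted sense.
    Thresholds are illustrative only.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if not gaps:
        return "insufficient data"
    spread = pstdev(gaps)  # how much the posting intervals vary
    if spread < 5:         # metronomic posting: likely automated
        return "bot-like"
    if spread < 60:        # partly regular: possibly semi-automated
        return "cyborg-like"
    return "human-like"

print(classify([0, 300, 600, 900, 1200]))      # bot-like
print(classify([0, 310, 580, 930, 1150]))      # cyborg-like
print(classify([0, 1200, 1500, 9000, 20000]))  # human-like
```

A real system would draw on many more signals, such as account age, content similarity across accounts and network structure; posting regularity alone produces plenty of false positives.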

Digital espionage and counter-espionage thus subvert attempts by benign academic researchers to read the mood of a community. If nothing else, such intervention sullies the data. Can we ever trust social media feeds as indicators of mood, opinion and political motivation?

Notes

  • See The Young Turks’ discussion: “Russian Troll Army Impersonating US Muslims On Facebook” https://youtu.be/h6sRHf-j9xw
  • Image is of the artwork Keyframes, Edinburgh, Feb-Mar 2016.

References

Dibbell, J. 1993. “A rape in cyberspace.” The Village Voice, December 1993.
Rheingold, H. 1993. The Virtual Community: Homesteading on the Electronic Frontier. Reading, MA: Addison-Wesley.
