Sunday, 22 April 2018

A Fundamental Problem with Twitter

Twitter Logo
Possibly about to fly into a window.
I've not written much on here lately. There are a few reasons, mainly that I've been doing a lot of extra hours at work, but also because I've found myself spending more and more time on Twitter (as @66ALW99). However, I've been finding it more and more infuriating lately, and here's why...

In many ways, Twitter is a great medium. One reason that particularly appeals is that it allows a more-or-less real-time conversation with people and that rapid dialogue is pretty addictive. You post something and, often within minutes, you're having a chat with someone about what you've just posted. And the anonymity encourages you to engage with the ideas expressed, as you don't necessarily know the actual person you're talking to.

Well, that's the theory. Anyone using Twitter lately will have noticed the number of bots and trolls using it now, which Twitter is either unable or unwilling to manage. Sure, every so often, you hear about a purge of these accounts, but it's forgotten within days as the number of bot and troll accounts rises inexorably. Purges are a temporary reprieve at best.

So, given that Twitter cannot or will not deal with the problem, your options are basically to block or mute the accounts yourself. For a long time, I used these tools sparingly. If someone is expressing an idea, I figure I should hear it - even if it's one that I'm highly critical of. As an example, there seem to be more and more accounts expressing far-right viewpoints over the last year or two. I completely oppose these views, for reasons that I hope are obvious to the majority of decent people. But... if I block or mute them, then I'm not able to challenge the ideas and the thinking behind them. And if everyone who opposes those ideas does the same, they'll be left completely unchallenged. With the reach that social media has, I find this a troubling proposition.

What are Bots and Trolls?

Just quickly, for those wondering about terminology:

A Bot is a largely automated account that operates according to a set of rules. For example, it might look for certain hashtags and retweet them. Or it might automatically retweet posts from certain accounts. It may have a set of premade tweets that it will post at intervals. Or it may do all these things and more - there are many variations. (As an interesting aside, there's a variation sometimes called an Android, which does these automated things but also allows its operator to take manual control when needed.)
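To make the "set of rules" idea concrete, here's a minimal sketch of the decision logic such a bot might run. Everything here - the hashtags, the account names, the tweet structure - is purely illustrative and invented for this example; it doesn't use any real Twitter API.

```python
# Illustrative rule set for a hypothetical bot (all names are made up).
WATCHED_HASHTAGS = {"#somehashtag", "#anothertag"}   # hashtags the bot amplifies
WATCHED_ACCOUNTS = {"@agenda_hq"}                    # accounts it always retweets
CANNED_TWEETS = ["Premade talking point 1", "Premade talking point 2"]

def should_retweet(tweet):
    """Decide whether a bot following the rules above would retweet this tweet.

    `tweet` is assumed to be a dict with "author" and "text" keys.
    """
    # Rule 1: always amplify the watched accounts.
    if tweet["author"] in WATCHED_ACCOUNTS:
        return True
    # Rule 2: retweet anything carrying a watched hashtag (case-insensitive).
    hashtags = {word.lower() for word in tweet["text"].split() if word.startswith("#")}
    return bool(hashtags & WATCHED_HASHTAGS)
```

A real bot would wrap logic like this in a loop that polls a feed and posts from `CANNED_TWEETS` at intervals, but the core is just simple pattern-matching rules like these.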

A Troll account is generally not automated, but has a human in control. Russian trolls are the most well-known of these, but there are other sources. These can be nations, like Russia or China, or they may be smaller groups dedicated to particular topics like climate change denial. There is a good New York Times article about Russian "troll farms" here.

A troll, on its way to its St Petersburg workplace.
The common thread is that both bots and trolls exist to post views that, in an organised way, reflect a particular agenda.


So Twitter has a large and increasing number of these organised networks of accounts that exist purely to promote certain views, quite often based on deeply abhorrent ideologies like Nazism. There has to be pushback against these ideas. Yet Twitter is either unable or unwilling to deal with these accounts itself, which means that, if the ideas are to be challenged, it's left up to Twitter users themselves. So a disparate band of individuals is left to deal with the mess caused by organised groups of well-financed, often state-resourced agents spreading misinformation and propaganda. Despite the involvement of many smart and passionate people I've had the pleasure of interacting with on Twitter, this is clearly not an evenly matched battle. On top of this, one tactic of troll farms is to wear opponents down to the point where they break off engagement.

As a Twitter user, you can challenge expressed views, but there are limits to how many you can engage with. You can report accounts to Twitter, but my own experience is that this is a very hit-and-miss affair. Even if an account gets suspended, the people behind it can just open another one.

Basically, Twitter users have the following options:

Hey, I won't judge.

Ignore, mute or block offending accounts - in which case the views will be left unchallenged.

Engage the accounts and challenge the ideas - as bots are automated and trolls are literally paid to waste the time of opponents, this is clearly not going to work.

Report bots and trolls - the last option. Given the sheer number of them, this is virtually a full-time job and, as I've said, it's a hit-and-miss affair because Twitter just doesn't appear to be on the side of genuine users of its service.

Why do I say that? Well, that could be a whole post in itself, but I'll try to summarise...

From my own experience, there are days when I report upwards of a hundred accounts - if I'm lucky, maybe ten or so will have some kind of action taken against them. I've reported clear and provable cases of defamation only to have no action taken against the account responsible. Other users I've spoken to agree, although I recognise that there may be some element of confirmation bias at play here.

I've seen people express the view that this is because Twitter is in agreement with the views that are being reported - i.e., the owners are actual white supremacists, climate change deniers and so on. I suspect the actual reason is much more prosaic:

Twitter is a business


Despite all the usual talk about mission statements and providing a voice and, frankly, blah-blah-blah, the fact is that Twitter exists to make money. It has shareholders. They're going to be very careful about doing anything that may drive their share price down. There are two main ways that this impacts on their service, in my opinion.

Firstly, their stock value is tied to the number of users they have. As a service that is "free" to the users, their business model is based on being able to target their users for advertising purposes. Obviously, the more users they have, the bigger their reach and the more valuable they are as an advertising medium. If they suddenly got rid of 15% of accounts, that's going to have an impact on their perceived worth to advertisers. Of course, in reality, bot and troll accounts are worthless to advertisers, but the perception would be that Twitter just lost 15% of their users. To combat that perception, Twitter would have to admit their bot/troll problem which, again, would not be good in terms of share price, because now people would be wondering just how many of the accounts they're advertising to are actually bots or trolls.

Secondly, to actually deal with the problem, Twitter would have to hire a LOT more people as moderators, to check which accounts are real people and which exist solely as organised agenda-pushers. These people would also need training and, based on my own experiences and chats with other people, the current level of training is simply not sufficient. I say that because recognising whether an account is, for example, pushing racist views has to be a little more sophisticated than simply checking to see if derogatory terms for minorities have been used. The people pushing racist views are simply not that obvious anymore. A moderator would have to be familiar with the narratives, so that when they see the word "globalists", they can recognise that this is probably an antisemitic dogwhistle meaning "Jews". They'd have to know that when someone talks about "not being replaced", they're probably referring to a far-right conspiracy theory that says there's an organised plan (usually by the "globalists") to get so many migrants into "white" countries that everyone eventually becomes mixed-race (yeah, I know). This is some pretty in-depth stuff that moderators would need to be familiar with, and it'll cost quite a bit of money to train people to recognise these ideas when they see them. But again, stock price - hiring more people means higher operating costs, which will hit stock prices, so naturally Twitter will not want to do this.

As an optional "Thirdly", Twitter really needs to make it much easier for users to flag problematic accounts. I use the Twitter desktop app for Windows, and it's ridiculous that there is NO option to report an account for anything other than spamming. If I see something abusive, I have to either open up twitter.com in a browser or switch to my phone, find the offending tweet and then report it from there. I certainly don't mind reporting accounts that are being deliberately offensive or engaging in hate speech, but come on, Twitter, it shouldn't be my JOB - I'd much rather be looking at pictures of funny cats or whatever. You know, the things that actually make Twitter fun. Your interface for reporting accounts is terrible (why is there no option for reporting a suspected bot, as just one example?). Seriously, we're on Twitter because we like what it has to offer. We don't mind giving some help towards keeping it that way.

So, The "Fundamental Problem with Twitter"


To keep genuine users on Twitter, Twitter needs to be enjoyable. Right now, I often feel like Twitter is a second job and, frankly, I'm working enough hours at my actual job, the one that pays me. I shouldn't have to spend hours every week arguing with Nazis and reporting trolls. But if I, and others, don't do this, your platform becomes an unopposed cesspit of bigotry. At some point, though, unless there's more support from Twitter, people are simply going to give up on it. A big part of the reason I've written this is because I've felt close to it myself. I don't WANT to go, but I feel as though I'm being driven off the platform and there's no support from the people who are supposedly in charge of it.

To put it bluntly, Twitter is either going to invest in running its platform properly or it's going to lose the genuine users of it.






