Twitter is something of a double-edged sword for its millions of users. On one hand, conversations are fast-flowing, free, and open, and a single retweet can bring that smart thing you said to everyone’s attention. On the other, a single retweet can bring that smart thing you said to the attention of a roving hate mob, making your life utterly miserable and possibly putting you in actual danger.
Twitter’s been saying for years that it needs to improve its tools for mitigating abuse and harassment, and for years users have been finding each new option insufficient at best. But this time, the company’s leadership promises, they’re going to make good changes. For real.
Twitter CEO Jack Dorsey made the announcement yesterday on — where else? — Twitter.
“We’re taking a completely new approach to abuse on Twitter,” he tweeted, “including having a more open and real-time dialogue about it every step of the way.”
That announcement, in turn, led to a series of tweets from Ed Ho, VP of Engineering, in which he laid out the general plan.
“Making Twitter a safer place is our primary focus and we are now moving with more urgency than ever,” Ho wrote. “We heard you, we didn’t move fast enough last year; now we’re thinking about progress in days and hours, not weeks and months.”
Users can expect both public-facing changes and invisible, back-end ones coming soon, Ho promised, beginning with “long overdue fixes to mute/block and stopping repeat offenders from creating new accounts” as soon as this week.
Twitter users may be forgiven, however, for taking a “we’ll believe it when we see it” stance. Previous CEO Dick Costolo admitted in 2015 that, “we suck at dealing with abuse and trolls,” yet both everyday and high-profile incidents — like a hate mob that mobilized against actress and comedian Leslie Jones last summer — have continued to occur.
Twitter did unveil a few new tools in November, allowing users to mute not just specific accounts, but also specific keywords and conversation strings.
At that time, Twitter also updated its reporting function, so that users either experiencing or witnessing harassment and abuse could more accurately report the kind of abuse taking place.
by Kate Cox via Consumerist