Periscope: abusive live comments to be voted in or out by viewers

Periscope, Twitter’s live video streaming service, is experimenting with a “flash jury” of users to decide whether abusive commenters deserve to be blocked from a broadcast.

The feature is one of the more inventive ways to tackle abusive comments, a problem which is particularly hard to manage on a platform where all comments are overlaid on a live broadcast and sometimes even over the face of the broadcaster.

The feature allows viewers to report comments as abuse or spam while the live broadcast is under way. A small, randomly selected jury of viewers is then asked to vote on the reported comment: they can agree it is abuse or spam, say it looks OK, or say they are not sure.

If the majority vote the comment down, the offender can’t post another comment for one minute. If the offender posts a second abusive comment, they are blocked from the broadcast. Comments can also be reported under the deliberately vague “other reason” flag, which is expected to be used to report comments in other languages; broadcasters often find these irritating, yet they aren’t abusive or spammy.
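
Periscope has not published implementation details, but the flow described above amounts to a small moderation state machine: report, random jury vote, timeout on a first majority verdict, block on a second. The Python sketch below is purely illustrative; the jury size, vote labels and function names (handle_report, ask_juror, mute) are assumptions, not Periscope’s actual code or API.

```python
import random
from collections import defaultdict

# Illustrative constants: Periscope has not disclosed the real values.
JURY_SIZE = 5
TIMEOUT_SECONDS = 60  # "can't post another comment for one minute"

strikes = defaultdict(int)  # offences per commenter in this broadcast
blocked = set()             # commenters blocked from this broadcast


def handle_report(comment, commenter, viewers, ask_juror, mute):
    """Sketch of the reported flow: a small random jury of viewers
    votes 'abuse_or_spam', 'looks_ok', or 'not_sure'."""
    if commenter in blocked:
        return
    # Draw a random jury from the current viewers (a sequence).
    jury = random.sample(viewers, min(JURY_SIZE, len(viewers)))
    votes = [ask_juror(juror, comment) for juror in jury]

    # A majority 'abuse or spam' vote triggers a one-minute timeout;
    # a second offence blocks the commenter from the broadcast.
    if votes.count("abuse_or_spam") > len(votes) / 2:
        strikes[commenter] += 1
        if strikes[commenter] == 1:
            mute(commenter, TIMEOUT_SECONDS)
        else:
            blocked.add(commenter)
```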

The feature has been in development for six months with a small test group. Aaron Wasserman, a senior engineer at Periscope, said the aim was to deal with abusive comments without adding to the broadcaster’s burden by making them flag the comments themselves. Wasserman said it is significant that the random jury is made up of people who have watched the full broadcast, so they can judge the comment against its overall context and tone.

“The thing that makes [Periscope] so beautiful is it’s an intimate experience, but we realised that with that intimacy came the potential for abuse in a pretty significant way… comments are ephemeral,” he said. “These comments are gone almost as quickly as they appear and the damage is done as quickly.”

Wasserman added that the company had tried several moderation methods in the past year, including allowing broadcasters to block people or to restrict audiences to their own followers only – both of which put responsibility on the broadcaster.

Sarah Haider, head of client engineering at Periscope, said a common moderation technique is to blacklist certain words, but that method is too crude to take the context of the conversation into account. “One comment in one broadcast could be OK in another context – it’s really difficult for a machine to understand that,” she said. “No one here is comfortable making Periscope the judge.”
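
To make Haider’s point concrete, here is a toy example of the blacklist approach she describes; the word list and function are invented for illustration. Because the filter fires on the word alone, it treats harmless gaming banter and a genuine threat exactly the same, which is the context problem she identifies.

```python
# Invented example: a word blacklist has no notion of context.
BLACKLIST = {"kill"}

def naive_flag(comment: str) -> bool:
    """Flags any comment containing a blacklisted word."""
    return any(word in BLACKLIST for word in comment.lower().split())

# In a gaming broadcast the first comment is probably harmless banter,
# but the blacklist flags both identically:
print(naive_flag("nice kill"))        # True
print(naive_flag("i will kill you"))  # True
```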

Wasserman added that the team would collect and analyse the results in aggregate to understand the behaviours and triggers of abusive comments, and how to improve reporting.

“We’re not going to solve internet abuse – there is no silver bullet – but my hope is that this will help,” he said. Twitter would be watching the new feature closely, he added, but the method was unlikely to be directly useful for Twitter, whose archived, published tweets present a very different challenge to Periscope’s real-time, ephemeral comments.

Periscope does not release user numbers but said 200m live broadcasts were created in its first year, to March 2016.

This article was written by Jemima Kiss in San Francisco for the Guardian, Tuesday 31 May 2016. © Guardian News and Media Limited 2010