
Scientists can build an early-warning system for trolls

Almost every website with comments suffers from trolls: people who spout obnoxious, irrational gibberish just to offend others. Since you can't simply ask people to behave like human beings, a lot of time and effort goes into monitoring and policing this idiocy. Thankfully, the internet's long national nightmare may be nearing its end, now that researchers from Stanford and Cornell have developed an early warning system for trolls. After a study that examined close to 40 million comments, the team found that trolls can be algorithmically identified before they've written 10 posts.

The team took the comments sections of CNN, Breitbart and IGN, examining the contributions of 1.7 million users over 18 months, along with the up and down votes each post received. The researchers then dug in to work out what differentiates a banned user from those deemed worthwhile members of the community. It turns out, perhaps unsurprisingly, that trolls are pretty easy to spot.

For instance, trolls tend to write less coherently, and often with more profanity, than other users. They also concentrate their posts in a narrow group of threads, and their comments often generate more responses than less inflammatory ones. The team thinks this latter point is because trolls are adept at "luring others into fruitless, time-consuming discussions."

Naturally, while a little obnoxiousness from a new member of a community is tolerated, that patience wears out over time, leading to an increased rate of post deletion and, eventually, banning. Familiarity also breeds contempt: trolls were found to post significantly more frequently than other members of a site. For instance, one candidate for banning had written 264 missives on CNN, far in excess of the average of 22.

Loading all of these characteristics into a computer, the team was able to cook up an algorithm that they claim identifies trolls with a success rate of 74 percent. The researchers believe that a lot more work is needed before comment services can shoot down negative comments before they're read, but it is, at the very least, a promising start.
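To give a flavor of how signals like these can be combined into a troll score, here's a minimal sketch: a hand-weighted logistic score over a few of the features the article mentions (profanity, thread concentration, replies drawn, posting frequency). The weights, feature names and threshold are all illustrative assumptions for this example, not the researchers' actual model, which was trained on real comment data.

```python
import math

def troll_score(profanity_rate, thread_spread, replies_per_post, posts_per_day):
    """Return a 0-1 score; higher means more troll-like.

    All weights below are hypothetical, chosen only to illustrate
    how such features might be combined."""
    z = (
        4.0 * profanity_rate      # fraction of a user's words that are profane
        - 2.0 * thread_spread     # 1.0 = posts spread evenly, 0.0 = one thread
        + 0.6 * replies_per_post  # inflammatory posts draw more responses
        + 0.1 * posts_per_day     # trolls post far more often than average
        - 2.0                     # bias term
    )
    return 1.0 / (1.0 + math.exp(-z))  # logistic squash to 0-1

def is_likely_troll(score, threshold=0.5):
    """Flag a user whose score crosses a (hypothetical) cutoff."""
    return score >= threshold

# A prolific, profane user concentrated in a few threads scores high...
heavy = troll_score(profanity_rate=0.3, thread_spread=0.2,
                    replies_per_post=5.0, posts_per_day=14.0)
# ...while an occasional, civil poster spread across many threads scores low.
light = troll_score(profanity_rate=0.01, thread_spread=0.9,
                    replies_per_post=0.5, posts_per_day=0.5)
```

In a real system the weights would be learned from labeled data (banned vs. non-banned users) rather than set by hand, which is presumably how the team arrived at its 74 percent figure.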