Sverker Johansson is, indirectly, one of the most prolific editors on Wikipedia, the collectively-edited online encyclopaedia.
The Swedish science teacher, who goes by the username Lsj on the website, is the creator of Lsjbot, an automatic editor of Wikipedia which helped make the Swedish-language version of the site the eighth in the world to hit one million articles.
So far, Lsjbot has created 3m articles across multiple versions of the site, and racked up more than 10m individual edits. Its main task, according to Johansson, is creating articles about all species of plants and animals, and most of those 10m edits are related to that task one way or another.
Once upon a time, bots on Wikipedia were rare, but Johansson says that these days they’re an increasingly important part of the machinery of Wikipedia. His own story is typical: “At first, around 2007, I started editing Wikipedia ‘by hand’, the same way as everybody else; then, in 2011, I started editing by bot.”
There are limits to what can be done by bots, of course. “Anything requiring real creativity or real language understanding requires a human mind directly at the keyboard,” Johansson explains. But of the tasks that can be done automatically, an increasing amount are.
“[They do] lots of maintenance work. Locating and sometimes fixing syntax errors and other anomalies in articles. Identifying vandalism.” On English Wikipedia, the bots are also used for “repairing vandalism.” And everywhere, they can be found “updating stuff, archiving old discussions, adding date stamps to manual problem reports, etc. Changing [for example] the categorisation of articles.”
Robots writing Nasa’s history?
A major hazard of that approach, though, is that articles end up being created not because they add to the sum of human knowledge, but because they can be created automatically.
In 2008 – almost prehistory, by Wikipedia bot standards – an algorithm called ClueBot II “wrote” 15,000 articles on asteroids by parsing and rewriting public data from Nasa’s database.
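That kind of bot can be sketched as a simple fill-in-the-template loop over structured records. The field names, sample data and wikitext template below are invented for illustration; the real ClueBot II parsed Nasa’s public dataset and operated under Wikipedia’s bot-approval process.

```python
# A minimal, hypothetical sketch of template-based stub generation,
# in the spirit of bots like ClueBot II or Lsjbot. The template and
# records here are made up; a real bot would parse a public dataset
# and submit edits via the MediaWiki API.

TEMPLATE = (
    "'''{name}''' is a minor planet orbiting the Sun at a mean "
    "distance of {distance_au} AU. It was discovered in {year}."
)

# Invented example records standing in for parsed database rows.
records = [
    {"name": "Example Object A", "distance_au": 2.5, "year": 1969},
    {"name": "Example Object B", "distance_au": 3.1, "year": 1977},
]

def make_stub(record):
    """Fill the wikitext template from one structured data record."""
    return TEMPLATE.format(**record)

for stub in (make_stub(r) for r in records):
    print(stub)
```

The weakness the article goes on to describe is visible even here: the output is only as current and correct as the dataset behind it.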
Those articles sat there, being edited by other bots – one changed the tags, another linked to the Japanese version, a third corrected a style guide issue – until an actual human realised that having “an out of date, broken, copy of the NASA web site” wasn’t the best way to run an encyclopaedia.
In 2012, the creation was finally undone, and today all of ClueBot’s work lives in one ‘list of minor planets’.
‘Bots go through an approval process’
Erik Möller, the deputy director of the Wikimedia Foundation, which oversees the site, is unconcerned by examples like ClueBot.
“There is a comprehensive policy governing the use of bots,” he told the Guardian.
“Bots typically go through an approval process where a determination is made by humans whether the task they perform is useful. Bots that merely perform unnecessary busywork are either not approved in the first place or shut down.”
But he concedes that “the structured data in Wikimedia projects in particular will increasingly be maintained in automated ways, which should help keep things up-to-date and reduce the potential for human error in manually importing or updating numbers”.
Robot: freeing journalists to do journalism
Wikipedia is by no means the only site seeing an increase in the amount of content created by algorithms. Perhaps the most notable example is the Associated Press, which announced in June that a robot would be taking over the majority of US corporate earnings stories.
“We are going to use our brains and time in more enterprising ways during earnings season,” the agency said when announcing the change.
Johansson, for his part, defends the practice of automatically creating thousands of articles.
“Once in a while some really obscure place ends up in the news – say there is a plane crash in some hamlet you never heard of… But since we can’t know in advance which hamlet will be in the news tomorrow, better make a stub about every single hamlet.”
guardian.co.uk © Guardian News and Media Limited 2010