Automatic for the People: Pandemic-fueled rush to robo-moderation will be disastrous – there must be oversight

EFF raises alarm over increasing reliance on shoddy automation

Analysis The Electronic Frontier Foundation on Thursday warned that the fallout from the novel coronavirus pandemic – staff cuts, budget cuts, and lack of access to on-site content review systems, among others – has pushed tech companies to lean even more heavily on barely functional automated moderation systems.

Technology platforms have tended to favor automated content moderation over human editorial oversight. The results of such algorithmic policing have been imperfect but, more importantly to those implementing these systems, inexpensive compared to salaried employees or underpaid contractors.

Though most of the major tech companies involved in overseeing user-generated posts have been championing machine learning for years now, the EFF frets that AI-driven moderation has been talked up a bit too much lately. The advocacy group points to recent public statements by Facebook, Twitter, and YouTube that cite increased reliance on automated tools for content moderation.

Two weeks ago, for example, Twitter said it was increasing its use of machine learning and automation, even as it noted that such systems "sometimes lack the context that our teams bring, and this may result in us making mistakes."

The EFF's concern is that automation doesn't work at scale and harms free expression when legitimate content gets blocked. "In short, automation is not a sufficient replacement for having a human in the loop," said Jillian C. York, director for international freedom of expression, and Corynne McSherry, legal director, in a blog post.

Having people in the loop doesn't necessarily help: Cloudflare's 1.1.1.1 for Families DNS filtering service misfired when it debuted on Wednesday because the service was initially configured with the wrong filtering set. But at least there's someone to blame.
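That sort of misconfiguration is easy to probe from the outside. Below is a minimal sketch, assuming the third-party dnspython package, of how one might compare answers from Cloudflare's unfiltered resolver against its two family-filter addresses (1.1.1.2 blocks malware; 1.1.1.3 blocks malware plus adult content). The hostname queried is a placeholder.

    # Compare what Cloudflare's unfiltered and filtered resolvers return
    # for the same hostname. Requires: pip install dnspython
    # Blocked names reportedly come back as 0.0.0.0 rather than NXDOMAIN.
    import dns.resolver

    RESOLVERS = {
        "unfiltered (1.1.1.1)": "1.1.1.1",
        "malware filter (1.1.1.2)": "1.1.1.2",
        "malware + adult filter (1.1.1.3)": "1.1.1.3",
    }

    def lookup(hostname: str, nameserver: str) -> str:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        try:
            answer = resolver.resolve(hostname, "A")
            return ", ".join(record.to_text() for record in answer)
        except dns.resolver.NXDOMAIN:
            return "NXDOMAIN"

    # "example.com" is a placeholder; substitute any hostname of interest.
    for label, ip in RESOLVERS.items():
        print(label, "->", lookup("example.com", ip))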

While the EFF credits tech firms for admitting they need human review for some types of content, the advocacy group expects algorithmic moderation will continue to err. It would therefore like companies to commit to the Santa Clara Principles, a set of best practices developed by industry and academia that focus on transparency, notice, and an appeals process for content takedown decisions.

York and McSherry allow that even these might not be enough and hint at future efforts to expand the principles.

In an interview with The Register, Frank Pasquale, law professor at the University of Maryland in the US, and author of The Black Box Society: The Secret Algorithms That Control Money and Information, said he agreed that people should have due process protection against automated decision making, as a matter of ethics.

"It's important for people to be able to appeal decisions," he said, noting that the companies deploying automated systems constantly over-claim what the technology can accomplish.

However, he expressed skepticism that the Santa Clara Principles represent the optimal solution for dealing with automated content moderation.

He pointed to the Digital Millennium Copyright Act, noting that while internet freedom advocates don't like it, the law does have a model for handling takedown requests.

Different industry sectors, he suggested, benefit from different approaches. "I think it wise to have different principles in different areas," he said. "We have a lot of people who see the world through an internet lens. It's hard to create any principles that govern everything."

Automated systems do some things well, he said, pointing to moderation schemes that flag known child abuse images through digital signatures. But the point of automation isn't primarily to do the job better, he suggested.
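As a rough illustration of the signature matching Pasquale mentions, here is a minimal sketch in Python. It uses a plain SHA-256 digest for simplicity; production systems rely on perceptual hashes, such as Microsoft's PhotoDNA, that survive resizing and re-encoding. The blocklist entry here is hypothetical.

    # Signature-based flagging: hash each upload and compare the digest
    # against a blocklist of known signatures. Plain SHA-256 matches only
    # byte-identical files; real systems use perceptual hashing.
    import hashlib

    # Hypothetical blocklist; real lists come from clearinghouses such as
    # NCMEC. This entry is simply sha256(b"test") for the demo below.
    KNOWN_SIGNATURES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def signature(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def should_flag(upload: bytes) -> bool:
        return signature(upload) in KNOWN_SIGNATURES

    print(should_flag(b"test"))   # True
    print(should_flag(b"other"))  # False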

"The turn to automation is mainly driven by avoiding labor costs and secondarily driven by a desire to do the job better," he said.

Pasquale agrees that any rational, fair system has to balance algorithmic decisions with human ones. "You can use automation to flag things and delay their posting but you want to have a person who is ultimately responsible for that," he said.
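That flag-and-delay model maps naturally onto a moderation queue in which software can hold a post but only a named human can publish or remove it. A minimal sketch, with a placeholder classifier standing in for whatever model a platform actually runs:

    # Flag-and-hold workflow: an automated classifier quarantines suspect
    # posts, and a named human reviewer makes every final call.
    from dataclasses import dataclass, field
    from typing import Callable, List, Optional

    @dataclass
    class Post:
        author: str
        text: str
        status: str = "pending"   # pending -> published | held | removed
        reviewer: Optional[str] = None

    @dataclass
    class ModerationQueue:
        classifier: Callable[[str], bool]   # True means "suspect"; placeholder
        held: List[Post] = field(default_factory=list)

        def submit(self, post: Post) -> None:
            if self.classifier(post.text):
                post.status = "held"        # delayed, not deleted
                self.held.append(post)
            else:
                post.status = "published"

        def review(self, post: Post, reviewer: str, approve: bool) -> None:
            # A person, not the model, makes the final call and goes on record.
            post.reviewer = reviewer
            post.status = "published" if approve else "removed"
            self.held.remove(post)

    # Usage: anything containing "spam" is held until a human decides.
    queue = ModerationQueue(classifier=lambda text: "spam" in text)
    post = Post(author="alice", text="buy spam now")
    queue.submit(post)                                   # post.status == "held"
    queue.review(post, reviewer="bob", approve=False)    # post.status == "removed"

Recording the reviewer's name on every final decision is the point of the design: accountability attaches to a person, not to the model. ®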
