“At that scale, even if you’re 99 percent accurate, you’re going to have a lot of mistakes.”
According to the New York Times, some 7,500 moderators spread across the globe are responsible for policing Facebook’s content.
No system is without its faults, and for Facebook, the faults amount to censoring political speech in politically charged, even dangerous parts of the world.
The inside scoop, as reported by the NYT:
Every other Tuesday morning, several dozen Facebook employees gather over breakfast to come up with the rules, hashing out what the site’s two billion users should be allowed to say. The guidelines that emerge from these meetings are sent out to 7,500-plus moderators around the world.
The closely held rules are extensive, and they make the company a far more powerful arbiter of global speech than has been publicly recognized or acknowledged by the company itself, The New York Times has found.
The Times was provided with more than 1,400 pages from the rulebooks by an employee who said he feared that the company was exercising too much power, with too little oversight — and making too many mistakes.
An examination of the files revealed numerous gaps, biases and outright errors. As Facebook employees grope for the right answers, they have allowed extremist language to flourish in some countries while censoring mainstream speech in others.
Moderators were once told, for example, to remove fund-raising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook’s internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to take down comments critical of religion.
The Facebook employees who meet to set the guidelines, mostly young engineers and lawyers, try to distill highly complex issues into simple yes-or-no rules. Then the company outsources much of the actual post-by-post moderation to companies that enlist largely unskilled workers, many hired out of call centers.
Those moderators, at times relying on Google Translate, have mere seconds to recall countless rules and apply them to the hundreds of posts that dash across their screens each day. When is a reference to “jihad,” for example, forbidden? When is a “crying laughter” emoji a warning sign?
Moderators express frustration at rules they say don’t always make sense and sometimes require them to leave up posts they fear could lead to violence. “You feel like you killed someone by not acting,” one said, speaking on the condition of anonymity because he had signed a nondisclosure agreement.
Facebook executives say they are working diligently to rid the platform of dangerous posts.
Which continues to present a problem of its own. What, exactly, is a “dangerous post”?
“It’s not our place to correct people’s speech, but we do want to enforce our community standards on our platform,” said Sara Su, a senior engineer on the News Feed. “When you’re in our community, we want to make sure that we’re balancing freedom of expression and safety.”
Monika Bickert, Facebook’s head of global policy management, said that the primary goal was to prevent harm, and that to a great extent, the company had been successful. But perfection, she said, is not possible.
“We have billions of posts every day, we’re identifying more and more potential violations using our technical systems,” Ms. Bickert said. “At that scale, even if you’re 99 percent accurate, you’re going to have a lot of mistakes.”
Facebook’s problems seem to be growing, and quickly.
Just recently, it was revealed that Facebook gave large corporations access to user data, including private messages. These corporations were allowed to read, write, and delete private messages without users ever being aware. Additionally, the report indicated Facebook might be a lot more chummy with Russian state interests than it has let on.