Saturday, December 4, 2021

Facebook shouldn’t be all about metaverse right now

Facebook Inc is planning to change its name to something related to the metaverse, a new digital network for communicating through augmented and virtual reality, according to a report in The Verge, which cites a source with direct knowledge. Over the weekend, the company also said that as part of its metaverse-building efforts, it would create 10,000 high-skilled jobs in Europe.
Putting aside the prospects of financial success for this big new platform, which don’t look good, Facebook’s hyperfocus on the metaverse right now reflects poor judgment by its management, and by Mark Zuckerberg in particular.
Evidence is mounting that Facebook pushes older people towards conspiracy theories and teens towards body-image issues. Zuckerberg should instead be focused on carrying out the mother of all clean-up jobs: hiring thousands more staff, especially content moderators, to root out the harmful content on his existing sites before building a new one that will deliver the same old problems.
Content moderators are contractors who scour Facebook and Instagram for potentially harmful content, and they are much cheaper than engineers. An entry-level engineer at Facebook in the UK earns about $125,000 a year, according to a site that tracks Big Tech engineering salaries. Meanwhile, content moderators who work for Accenture plc, one of the biggest agencies doing Facebook’s clean-up work, earn about $37,000 a year, according to Glassdoor.
Facebook relies on roughly 15,000 content moderators to keep its site clean, and with the hiring budget it announced for the metaverse, it could more than double that number. This is exactly what a recent New York University study said Facebook should do to weed out harmful content.
In a separate blog post on Sunday, the company said its “improved and expanded AI systems” had led to a drop in hate speech, which now made up just 0.05% of content viewed on the site. (Facebook got that number by selecting a sample of content and then labeling how much violated its hate speech policies.) The company seems to be implying that it doesn’t need many more moderators because its technology is getting better at cleaning things up.
But these stats about harmful content, which Facebook shares in quarterly publications known as transparency reports, have a problem. Researchers have long been skeptical of such reports from Big Tech, according to Ben Wagner, an assistant professor at Delft University of Technology in the Netherlands, who co-wrote a study in February about their limitations.
He pointed out that the German government sued Facebook in 2019 for misleading regulators by, among other things, recording only certain categories of user complaints in data it was required to share with them. Facebook, which the government ordered to pay a 2 million-euro ($2.3 million) fine, said it had complied with Germany’s law on transparency and that some aspects of the law “lacked clarity.” It reserved the right to appeal.
Facebook faces other allegations of fudging its transparency report numbers. According to a Wall Street Journal story, which cited internal documents leaked by Facebook whistleblower Frances Haugen, Facebook changed its complaints process in 2019 by making it more difficult for people to flag content. Facebook told the Journal that this “friction” was intended to make its systems more efficient, and that it had since rolled some of that friction back.
With no common standards for measuring harm, social media transparency reports end up confusing and unclear. For instance, Facebook’s 2018 transparency report cited 1,048 complaints from users, while Twitter Inc and Alphabet Inc’s YouTube each reported more than 250,000, according to the German lawsuit against Facebook. That’s a huge discrepancy in tracking.
And such reports aren’t properly audited. Facebook said it has set up a data transparency advisory panel of seven academics to make an “independent” assessment of its transparency reports. But the panel, like other scientific advisory boards, is paid a fixed honorarium by Facebook before its assessment, which undercuts that independence.
Still, this is one area where Facebook seems to have moved in the right direction. It recently hired Ernst & Young Global Ltd., one of the Big Four accounting firms, to assess how it measures harm, saying EY would start its audit some time in 2021. Set up correctly, that could create a more reputable chain of accountability than exists today. Facebook declined to answer questions about when the audit would be published, which criteria EY would apply, or which arm of EY would do the audit.
In the meantime, Facebook has to do more to improve its policing of harmful content. That’s why it and other social media sites should be pushed to hire more moderators — thousands more — to help clean up their sites. That’d be a better investment than rushing to build an entirely new digital reality platform like the metaverse, which is destined to have the same messes as the old platforms.


Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of “We Are Anonymous.”
