Meta might let anti-vax posts back onto Facebook and Instagram
Source: https://www.theverge.com/2022/7/27/23280404/facebook-instagram-covid-antivax-misinformation-oversight-board-review

Today, let’s talk about a settled question that Meta has decided to re-open: what should the company do about misinformation related to COVID-19?

Since the earliest days of the pandemic, Meta has sought to remove false claims about the disease from Facebook and Instagram. And for just as long, the company has faced criticism that it hasn’t done a very good job. A year ago this month, asked about the role “platforms like Facebook” played in spreading misinformation about the disease, President Biden said “they’re killing people” — though he walked his statement back a day later.

Still, Biden voiced a fear that is deeply held among Meta critics: that the platform’s huge user base and algorithmic recommendations often combine to help fringe conspiracy theories reach huge mainstream audiences, promoting vaccine hesitancy, resistance to wearing masks, and other public health harms.

The pandemic is not close to over — an estimated 439 people died of COVID in the past day, up 34 percent in the past two weeks. And highly infectious Omicron subvariants continue to tear through the country, raising fears of a surge in cases of long COVID — a condition that experts say has already been “a mass disabling event.” An estimated 1 in 13 American adults reported having long COVID symptoms earlier this month, according to the US Centers for Disease Control and Prevention.

Despite that, Meta is now considering whether to relax some of the restrictions it has placed on COVID-related misinformation, including whether to continue removing posts containing false claims about vaccines, masks, social distancing, and related subjects. It has asked the Oversight Board — an independent group funded by Meta to help it make difficult calls relating to speech — for an advisory opinion on how to proceed.

Nick Clegg, the company’s president of global affairs, explained Tuesday in a blog post:

In many countries, where vaccination rates are relatively high, life is increasingly returning to normal. But this isn’t the case everywhere and the course of the pandemic will continue to vary significantly around the globe — especially in countries with low vaccination rates and less developed healthcare systems. It is important that any policy Meta implements be appropriate for the full range of circumstances countries find themselves in.

Meta is fundamentally committed to free expression and we believe our apps are an important way for people to make their voices heard. But some misinformation can lead to an imminent risk of physical harm, and we have a responsibility not to let this content proliferate. The policies in our Community Standards seek to protect free expression while preventing this dangerous content. But resolving the inherent tensions between free expression and safety isn’t easy, especially when confronted with unprecedented and fast-moving challenges, as we have been in the pandemic. That’s why we are seeking the advice of the Oversight Board in this case. Its guidance will also help us respond to future public health emergencies.

For all the criticism Meta has received over its enforcement of health misinformation policies, by some measures the steps it took clearly had a positive effect on the platform. The company estimates it has taken down more than 25 million posts under its stricter policies, which now require the removal of 80 separate false claims about the disease and its vaccines.

At the same time, the platform arguably has at times overreached. In May 2021, I wrote about Meta’s decision to reverse an earlier ban on discussing the possibility that COVID-19 leaked from a Chinese lab. The company had imposed that ban amid a spike in hateful violence against Asian people, fearing that conspiracy theories about the disease’s origin could be used to justify further attacks.

But as debate about the virus’ origin intensified, Meta began allowing people to speculate again. (To date, no consensus on the issue has emerged.) I wrote at the time that the company probably should not have taken a position on the issue in the first place, instead using its existing hate-speech policies to moderate racist posts:

I generally favor an interventionist approach when it comes to conspiracy theories on social networks: given the harm done by adherents to QAnon, Boogaloo, and other extremist movements, I see real value in platforms reducing their reach and even removing them entirely.

On some questions, though, platform intervention may do more harm than good. Banning the lab-leak hypothesis gave it the appearance of forbidden knowledge, when acknowledging the reality — that it is unlikely, but an open question — may have been just dull enough to prevent it from catching fire in those fever swamps.

Last week, I asked Clegg why the company had decided to ask the board for a second opinion on health misinformation now. One, he said, Meta assumes there will be future pandemics that bring with them their own set of policy issues. The company wants to get some expert guidance now so it can act more thoughtfully the next time around. And two, he said, the Oversight Board can take months to produce an opinion. Meta wanted to get that process started now.

But more than anything, he said, the company wanted a check on its power — to have the board, with which it signed a new three-year, $150 million operating deal this month, weigh in on what have been some fairly stringent policies.

“This was a very dramatic extension of our most exacting sanction,” Clegg told me. “We haven’t done it on this scale in such a short period of time before. … If you have awesome power, it is all the more important that you exercise that awesome power thoughtfully, accountably, and transparently. It would be curious and eccentric, in my view, not to refer this to the Oversight Board.”

Indeed, weighing in on policies like this is one of the board’s two core duties. The primary one is to hear appeals from users who believe their posts were wrongly removed and should be restored, or were wrongly left up and should be taken down. When the board takes those cases, its decisions are binding, and Meta has so far always honored its findings.

The board’s other key duty is to offer opinions on how Meta ought to change its policies. Sometimes it attaches those opinions to decisions in individual cases; other times, as with the COVID policies, Meta asks the board about something. Unlike cases about single posts, the board’s opinions here aren’t binding — but to date, Meta has adopted roughly two-thirds of the changes the board has proposed.

Some people continue to write the board off anyway. Since before it even began hearing cases in 2020, the board has faced withering complaints from critics who argue that it serves as little more than a public-relations function for a company so beleaguered it had to change its name last year.

And yet it’s also clear that Meta and other social platforms have a profound need for the kind of rudimentary justice system a board like this can provide. In its first year, the board received 1.1 million appeals from Meta’s users. Before the board existed, they had no recourse beyond some limited automated systems when Facebook made a mistake. And every tough call about speech was ultimately made by one person — Mark Zuckerberg — with no room for appeal.

It seems obvious to me that a system where these cases are heard by an expert panel, rather than a lone CEO, is superior, even if it still leaves much to be desired.

So what happens now?

One possibility is that Meta’s policy teams want to relax restrictions on COVID-related speech but want the cover that a decision from the Oversight Board would give them. They have reason to believe the board might come to that conclusion: it was stocked with free-speech advocates, and when it has ruled against Meta, it has generally been in the name of restoring posts that it believes were wrongfully removed.

That said, the company will also likely be in for a drubbing from left-leaning politicians and journalists, along with some number of users, if the board gives it the go-ahead to relax its policies and it does so. Clegg told me that, should that happen, Facebook and Instagram would use other measures to reduce the spread of misinformation — adding fact-checks, for example, or reducing the distribution of false posts in feeds. But the mere presence of anti-vax content on Meta’s platforms will lead to new criticism — and possibly new harms.

Another possibility is that the board won’t take the bait. Members could argue that removing health misinformation, while a drastic step, continues to be a necessary one — at least for now. The board remains relatively new, and mostly unknown to the general public, and I wonder what appetite members have to stand up for people’s right to spread lies about vaccines.

Whatever the board decides, Clegg said, Meta will move cautiously with any changes. At the same time, he said, the company wants to be judicious in how it deletes user posts.

“I think you should deploy the removal sanction very carefully,” he said. “You should set the bar really high. You don’t want private-sector companies to be removing stuff unless it really is demonstrably related to imminent, real-world harm.”


