Texas Attorney General Ken Paxton sued Pfizer last week, claiming the pharmaceutical giant "deceived the public" by "unlawfully misrepresenting" the effectiveness of its mRNA COVID-19 vaccine and that it sought to silence critics.
The lawsuit also blames Pfizer for not ending the pandemic after the vaccine's release in December 2020. "Contrary to Pfizer’s public statements, however, the pandemic did not end; it got worse" in 2021, the complaint reads.
"We are pursuing justice for the people of Texas, many of whom were coerced by tyrannical vaccine mandates to take a defective product sold by lies," Paxton said in a press release. "The facts are clear. Pfizer did not tell the truth about their COVID-19 vaccines."
In all, Paxton's 54-page complaint acts as a compendium of pandemic-era anti-vaccine misinformation and tropes while making a slew of unsupported claims. But central to the Lone Star State's shaky legal case is an argument over the standard math Pfizer used to assess the effectiveness of its vaccine: a calculation of relative risk reduction.
This argument is as unoriginal as it is incorrect. Anti-vaccine advocates have championed this flawed math-based theory since the height of the pandemic, and actual experts have roundly debunked it many times. Still, it appears in all its absurd glory in Paxton's lawsuit, which seeks $10 million in civil penalties.
Math argument
Briefly, the lawsuit, like the anti-vaccine rhetoric before it, argues that Pfizer should have presented its vaccine's effectiveness in terms of absolute risk reduction rather than relative risk reduction. Doing so would have made the highly effective COVID-19 vaccine appear extremely ineffective. Based on relative risk reduction in a two-month trial, Pfizer's vaccine appeared 95 percent effective at preventing COVID-19, as Pfizer advertised. Using the same trial data but calculating absolute risk reduction, the figure would have been a mere 0.85 percentage points.
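To see where both numbers come from, here is a minimal Python sketch using approximate case counts from Pfizer's phase 3 trial (roughly 8 COVID-19 cases among about 18,198 vaccinated participants versus 162 among about 18,325 placebo recipients; the exact counts are in the published trial results, so treat these as assumptions). It lands within rounding of the figures above:

```python
# Approximate counts from Pfizer's phase 3 trial results; the exact
# figures here are assumptions based on the published data.
vaccinated_cases, vaccinated_total = 8, 18_198
placebo_cases, placebo_total = 162, 18_325

risk_vaccinated = vaccinated_cases / vaccinated_total  # ~0.04 percent
risk_placebo = placebo_cases / placebo_total           # ~0.88 percent

# Relative risk reduction: the drop as a fraction of the placebo risk.
rrr = (risk_placebo - risk_vaccinated) / risk_placebo
# Absolute risk reduction: the raw percentage-point drop.
arr = risk_placebo - risk_vaccinated

print(f"Relative risk reduction: {rrr:.0%}")                      # ~95%
print(f"Absolute risk reduction: {arr:.2%} (percentage points)")  # ~0.84%
```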
The difference between the two calculations is quite simple: Absolute risk reduction is a matter of subtraction, the percentage-point drop in risk between an untreated and a treated group. So, for example, if a group of untreated people has a 60 percent risk of developing a disease but treatment drops that risk to 10 percent, the absolute risk reduction is 50 percentage points (60 - 10 = 50).
Relative risk reduction involves division: the percent change between the two groups' risks. In the same example, if a treatment lowers disease risk from 60 percent to 10 percent, the relative risk reduction is 83 percent (a 50-percentage-point drop divided by the 60 percent initial risk is about 0.83).
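For concreteness, here is a short Python sketch of the two formulas, using the hypothetical 60 percent and 10 percent risks from the example above:

```python
def absolute_risk_reduction(untreated_risk: float, treated_risk: float) -> float:
    """Subtraction: the percentage-point drop in risk."""
    return untreated_risk - treated_risk

def relative_risk_reduction(untreated_risk: float, treated_risk: float) -> float:
    """Division: the drop as a fraction of the untreated risk."""
    return (untreated_risk - treated_risk) / untreated_risk

# The hypothetical example above: risk falls from 60 percent to 10 percent.
print(absolute_risk_reduction(0.60, 0.10))  # 0.5   -> 50 percentage points
print(relative_risk_reduction(0.60, 0.10))  # ~0.83 -> 83 percent
```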
Both numbers can be helpful when weighing the risks and benefits of treatments, but absolute risk reduction is particularly key when the risk of a disease is low to begin with. This is easy to see by simply moving the decimal in the above example. If a treatment lowers a person's disease risk from 6 percent to 1 percent, the relative risk reduction is still 83 percent, an impressive number that might argue for treatment, but the absolute risk reduction is a paltry 5 percentage points, which could easily be negated by any potential side effects or high costs.
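The same decimal-shift comparison, in a self-contained sketch, shows how the relative number holds steady while the absolute number shrinks:

```python
# Recompute both metrics as the baseline risk drops a decimal place.
for untreated, treated in [(0.60, 0.10), (0.06, 0.01)]:
    arr = untreated - treated                # absolute: subtraction
    rrr = (untreated - treated) / untreated  # relative: division
    print(f"{untreated:.0%} -> {treated:.0%}: "
          f"relative {rrr:.0%}, absolute {arr * 100:.0f} percentage points")

# Output:
# 60% -> 10%: relative 83%, absolute 50 percentage points
# 6% -> 1%: relative 83%, absolute 5 percentage points
```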