The Failures of an Unduly Influential COVID-19 Model


Professor Neil Ferguson, who led the COVID-19 modeling team at Imperial College London, resigned from his government advisory role on May 5 after breaking the very lockdown rules he had helped put in place.

Ferguson led the Imperial College team that designed the computer model used, among others, to justify the recent stay-at-home orders in England as well as in the United States. We now know the model was so flawed that it should never have been relied upon for policy decisions in the first place.

Epidemiology—the study of the incidence, prevalence, and impact of disease—frequently calls upon models to forecast potential outcomes of diseases. Not surprisingly, once COVID-19 became a pandemic, policy experts from all across the world began relying on such models.


The Imperial College researchers ran one such model they had used in prior research and forecast a number of potential outcomes, including that, by October, more than 500,000 people in Great Britain and 2 million people in the U.S. would die as a result of COVID-19.

The model also predicted the United States could incur up to 1 million deaths even with "enhanced social distancing" guidelines, including "shielding the elderly." Imperial's modeling results influenced British Prime Minister Boris Johnson to impose a nationwide lockdown and influenced the White House as well.

I asked Ferguson and his colleagues for their model on multiple occasions to see how they arrived at their numbers, but they never replied to my emails. According to Nature, they had been " … working with Microsoft to tidy up the code and make it available." I also asked the U.S. Centers for Disease Control and Prevention for the code it used to develop its COVID-19 forecasts, but got no response.

So, my colleague Norbert Michel and I decided to take a publicly available COVID-19 epidemiological model and forecast the prevalence and mortality of the disease under a variety of plausible scenarios.

The results varied, depending on the assumptions we made about mortality rates within hospital intensive care units, asymptomatic rates, and the specification of the R0 (pronounced R-naught) value, which measures how easily the virus spreads.

We found mortality rate predictions can be quite variable depending on the age and comorbidities of those contracting the virus. Varying the assumed ICU mortality rate between 5% and 30%, we found that predicted mortality from the disease could range from roughly 78,000 to 810,000 deaths in the U.S. by Aug. 1.
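The basic sensitivity at work here is simple: deaths occurring in intensive care scale linearly with the assumed ICU mortality rate, so a sixfold spread in that one assumption alone produces a sixfold spread in that component of projected deaths. A minimal sketch of the arithmetic, with a purely hypothetical admissions figure (not a number from the Heritage paper, whose full model includes deaths outside the ICU as well):

```python
def icu_deaths(icu_admissions, icu_mortality_rate):
    """Deaths among ICU admissions under an assumed ICU mortality rate.

    Illustrative arithmetic only -- not the Heritage model. Both inputs
    are hypothetical; the point is that this component of projected
    deaths scales linearly with the assumed mortality rate.
    """
    return icu_admissions * icu_mortality_rate


admissions = 2_700_000  # hypothetical cumulative ICU admissions
for rate in (0.05, 0.30):
    print(f"Assumed ICU mortality {rate:.0%}: "
          f"{icu_deaths(admissions, rate):,.0f} projected ICU deaths")
```

Holding admissions fixed, moving the assumed rate from 5% to 30% multiplies this component of the projection by six, which is why the headline ranges in such exercises are so wide.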

Recent testing data indicates that the asymptomatic rate for COVID-19 is likely not trivial, and data from Iceland indicates this rate can be as high as 50%. Assuming an asymptomatic rate ranging from 15% to 55%, one can project deaths in the U.S. of between 118,000 and 394,000 by Aug. 1.  

Lastly, we looked at the model’s assumption about the virus’s basic reproductive number, the aforementioned R0 value. Popularized in the 2011 movie Contagion, the R0 value quantifies the average number of people an infected person will spread the virus to.

Under assumptions of the R0 value ranging from 1.5 to 3.5—plausible estimates based on medical research as discussed in our paper—the model predicted from 44,000 dead to 1.1 million dead by Aug. 1 in the U.S.
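The sensitivity to R0 can be seen even in a textbook SIR (susceptible-infected-removed) model, which is far simpler than either the Imperial College model or the one in our paper. The sketch below is purely illustrative, with hypothetical parameters (a 0.5% infection fatality rate, a 7-day infectious period); its output will not match the figures quoted above, but it shows how modest changes in R0 drive large changes in projected deaths:

```python
def sir_deaths(r0, population=330_000_000, ifr=0.005, gamma=1 / 7, days=200):
    """Deterministic SIR model stepped daily -- illustrative only,
    NOT the Imperial College model or the model used in our paper.

    r0:    basic reproduction number; transmission rate beta = r0 * gamma
    ifr:   assumed infection fatality rate (hypothetical value)
    gamma: daily recovery rate (here, a 7-day infectious period)
    """
    beta = r0 * gamma
    s, i, r = population - 100, 100.0, 0.0  # seed with 100 infections
    for _ in range(days):
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
    # Deaths are a fixed fraction of all removed (recovered or dead).
    return ifr * r


for r0 in (1.5, 2.5, 3.5):
    print(f"R0 = {r0}: ~{sir_deaths(r0):,.0f} projected deaths")
```

Because epidemic growth is roughly exponential at rate gamma × (R0 − 1) early on, and the final attack rate also rises steeply with R0, the projected death toll is extremely sensitive to this single parameter, which is exactly the pattern our paper documents.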

According to the Johns Hopkins coronavirus tracker, the U.S. currently has more than 83,000 deaths, which exceeds our lower-end estimates. But the point of our research was that these types of models produce many plausible scenarios, depending on reasonable assumptions.

As we learn more about the virus, it is imperative to continue to update the assumptions used in these models.

After we published our work, news surfaced that Microsoft had made some headway in making the Imperial College team's model available. But the code it released is a highly modified version of what the Imperial team actually used. And it turns out the model has serious flaws, which a former Google software engineer discusses at length on his blog.

The Imperial College code gives different answers for the same inputs. In particular, the same assumptions can produce results that differ by 80,000 deaths over a span of 80 days. The software engineer has noted myriad other problems as well, including undocumented code and numerous bugs.
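A reproducibility failure of this kind, where identical inputs yield different outputs, typically arises in stochastic simulations that draw random numbers without a fixed seed (or whose multithreading scrambles the draw order). A hypothetical toy example, not the Imperial code, showing both the failure and the standard fix:

```python
import random


def toy_epidemic_run(n_trials=1000, p_death=0.3, seed=None):
    """Toy stochastic simulation: count 'deaths' among n_trials
    Bernoulli draws. Hypothetical illustration, not the Imperial code.

    With seed=None the generator is seeded from system entropy, so
    repeated runs with identical inputs can disagree -- the
    reproducibility failure described above. A fixed seed restores
    determinism, which is what reviewers need to check a model's output.
    """
    rng = random.Random(seed)
    return sum(rng.random() < p_death for _ in range(n_trials))


# Unseeded: two runs with identical inputs may return different counts.
print(toy_epidemic_run(), toy_epidemic_run())

# Seeded: identical inputs plus a fixed seed give identical outputs.
print(toy_epidemic_run(seed=42) == toy_epidemic_run(seed=42))  # True
```

For a model feeding policy, the seeded behavior is the minimum bar: without it, no one can verify that a published death projection actually follows from the stated assumptions.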

This isn’t the first time bad models have made their way into policy. As we discussed in our work, statistical models can be useful tools for guiding policy, but they are only as credible as the assumptions on which they are based.

It is fundamentally important for models used in policy to be made publicly available, have assumptions clearly stated, and have their robustness to changes to these assumptions tested. Models also need to be updated as time goes on in line with the best available evidence.

Bottom line: The Imperial College model didn’t meet any of these criteria. And sadly, its model was one of the inputs relied on as the basis for locking down two countries.

The code we used at Heritage is available here. Our assumptions are clearly stated in our paper here.
