Imperial College model used to justify UK and U.S. lockdowns deemed ‘buggy mess’ & ‘totally unreliable’ by experts
One expert’s damning assessment: “In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
Last week our update included the report that Dr. Neil Ferguson, who developed the Imperial College Model predicting the spread of the Wuhan coronavirus, had resigned from his government position.
The reason for his departure was thought to be that he was discovered to be violating quarantine orders to see his mistress. However, it turns out there may be more motivation than a romantic affair.
The model that United Kingdom experts, as well as many others around the world, have largely used to guide their coronavirus policies has been deemed “totally unreliable” by experts. To start with, the Daily Telegraph‘s report on the assessments by technology professionals is damning.
The model, credited with forcing the Government to make a U-turn and introduce a nationwide lockdown, is a “buggy mess that looks more like a bowl of angel hair pasta than a finely tuned piece of programming”, says David Richards, co-founder of British data technology company WANdisco.
“In our commercial reality, we would fire anyone for developing code like this and any business that relied on it to produce software for sale would likely go bust.”
…The Imperial model works by using code to simulate transport links, population size, social networks and healthcare provisions to predict how coronavirus would spread. However, questions have since emerged over whether the model is accurate, after researchers released the code behind it, which in its original form was “thousands of lines” developed over more than 13 years.
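The article does not reproduce the Imperial code itself, but the kind of stochastic epidemic simulation it describes can be illustrated with a toy sketch. Everything below is illustrative: the function name, parameters, and contact model are invented for this example and are not drawn from the actual Imperial codebase.

```python
import random

def simulate_sir(population=1000, initial_infected=5, beta=0.3, gamma=0.1,
                 days=100, seed=42):
    """Toy stochastic SIR model: each day, every infected person has a
    chance (beta, scaled by the susceptible fraction) of infecting
    someone, and a chance (gamma) of recovering. With an explicit seed,
    the whole run is a deterministic function of its inputs."""
    rng = random.Random(seed)
    s, i, r = population - initial_infected, initial_infected, 0
    for _ in range(days):
        # Each infected individual makes one random contact per day.
        new_infections = sum(1 for _ in range(i)
                             if rng.random() < beta * s / population)
        new_infections = min(new_infections, s)
        recoveries = sum(1 for _ in range(i) if rng.random() < gamma)
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
    return s, i, r
```

Even this toy version shows why seeding matters: two runs with the same seed and parameters return identical counts, which is the property the critics say the Imperial code failed to guarantee.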
In its initial form, developers claimed the code had been unreadable, with some parts looking “like they were machine translated from Fortran”, an old coding language, according to John Carmack, an American developer, who helped clean up the code before it was published online. Yet, the problems appear to go much deeper than messy coding.
Other scientists have found troubling problems with the model as well:
Scientists from the University of Edinburgh have further claimed that it is impossible to reproduce the same results from the same data using the model. The team got different results when they used different machines, and even different results from the same machine.
“There appears to be a bug in either the creation or re-use of the network file. If we attempt two completely identical runs, only varying in that the second should use the network file produced by the first, the results are quite different,” the Edinburgh researchers wrote on the Github file.
A fix was provided, but that bug was only the first of many found in the program.
“Models must be capable of passing the basic scientific test of producing the same results given the same initial set of parameters…otherwise, there is simply no way of knowing whether they will be reliable,” said Michael Bonsall, Professor of Mathematical Biology at Oxford University.
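The test Bonsall describes can be expressed as a simple regression check: run the model twice with identical inputs and require identical outputs. The sketch below uses a hypothetical `run_model` stand-in for any seeded stochastic simulation; it is not the Imperial model's interface.

```python
import random

def run_model(seed, steps=1000):
    """Stand-in for any stochastic simulation: with an explicit seed
    and no hidden state, the run is a pure function of its inputs."""
    rng = random.Random(seed)
    state = 0.0
    for _ in range(steps):
        state += rng.random() - 0.5
    return state

# The basic scientific test: same parameters, same seed, same result --
# regardless of which machine runs it or what ran before.
assert run_model(seed=123) == run_model(seed=123)
```

A model that reads from or writes to shared files between runs (as the Edinburgh researchers found with the network file) breaks this purity, which is exactly how identical runs can diverge.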
It is hard to overstate how important the Imperial College model was in altering the approach to the coronavirus being taken by both the U.S. and Britain at that time.
In fact, just prior to the model’s pronouncement that 2.2 million people would die from coronavirus with no mitigation in this country and 1.1 million with mitigation, the White House was using the protective approaches applicable to a severe flu strain.
As a reminder, it is an approach I highlighted in early March. If we had continued with the “severe flu model,” including the decontamination of commonly touched surfaces (like those in subways) and promoting vitamins (like Vitamin D, which seems to be protective against coronavirus), it appears the severe flu model would have offered all the protections without the economy-crushing side effects.
Perhaps the best fisking of the code comes from virologist and computational epidemiologist Chris von Csefalvay, who has issues with the use of 13-year-old code for such a critical project. Specifically, von Csefalvay called the use of the code “somewhere between negligence and unintentional but grave scientific misconduct.”
First of all, the elephant in the room: code quality. It is very difficult to look at the Ferguson code with any understanding of software engineering and conclude that this is good, or even tolerable. Neil Ferguson himself attempts a very thin apologia for this:
I’m conscious that lots of people would like to see and run the pandemic simulation code we are using to model control measures against COVID-19. To explain the background – I wrote the code (thousands of lines of undocumented C) 13+ years ago to model flu pandemics…
— neil_ferguson (@neil_ferguson) March 22, 2020
That, sir, is not a feature. It’s not even a bug. It’s somewhere between negligence and unintentional but grave scientific misconduct.
For those who are not in the computational fields: “my code is too complicated for you to get it” is not an acceptable excuse. It is the duty of everyone who releases code to document it – within the codebase or outside (or a combination of the two). Greater minds than Neil Ferguson (with all due respect) have a tough enough time navigating a large code base, and especially where you have collaborators, it is not unusual to need a second or two to remember what a particular function is doing or what the arguments should be like.
Or, to put it more bluntly: for thirteen years, taxpayer funding from the MRC went to Ferguson and his team, and all it produced was code that violated one of the most fundamental precepts of good software development – intelligibility.
The entire post by von Csefalvay is worth reading. His conclusion about the public’s view of epidemiology in the wake of this disastrous miscalculation is spot on.
There will no doubt be public health consequences to the loss of credibility the entire profession has suffered, and in the end, it’s all due to the outdated ‘proprietary’ attitudes and the airs of superiority by a few insulated scientists who, somehow, somewhere, left the track of serving public health and humanity for the glittering prizes offered elsewhere. With their abandonment of the high road, our entire profession’s claim to the public trust might well be forfeited – in a sad twist of irony, at a time that could well have been the Finest Hour of computational epidemiology.
Based on these new determinations, it seems clear that the model we should be relying on is common sense: Practice good hygiene and open up the economy.