How will COVID-19 behave?
Additional waves of COVID outbreaks will occur; epidemiologists and virologists are modeling how they will unfold. The big debate is whether COVID-19 will bubble along through continual peaks and valleys like other coronaviruses, or whether there will be a seasonal break and then a major second wave later, like the Spanish flu pattern. This paper is the basis for my modeling to date.
https://science.sciencemag.org/content/early/2020/04/13/science.abb5793?versioned=true
How much testing?
The Governor’s Dashboard calculates two testing density goals.
The first is the tests per day required to meet actual demand generated by a responsive testing infrastructure: detecting disease in the field, plus the downstream series of tests required to treat patients along established medical care pathways. This comes to approximately 18.2M surveillance tests per day (mostly thermal, supplemented or replaced by viral home testing), 10.7M viral tests per day, and 9.3M antibody tests per day (assuming antibodies confer some immunity and are initially useful in convalescent therapy).
The second is a theoretical absolute minimum number of tests required by epidemiologists to measure R0, prevalence, asymptomatic rates, fatality rates, and so on. This number assumes a perfectly distributed grid of testing density using an established random-testing protocol to statistically calculate critical variables. Epidemiology testing helps create policy based on COVID-19's true biology. A single data point on prevalence is very helpful to an epidemiologist, but has little to do with the testing levels demanded by people in the trenches, dealing with disease and death all around them, trying to control a viral outbreak. Epidemiologists help by ensuring some expansion of coverage density, but their needs should only be used as an initial starting point for the Governor's Dashboard.
The minimal testing requirement of epidemiologists is 500,000 to 700,000 tests per day – 20× less than what is truly needed to create a decision trigger for turning an entire economy on and off. When deciding on a density, ask yourself, “If I got sick with COVID, would I rather be treated by an epidemiologist or a medical doctor?” If you chose the physician, then you would want the higher recommended testing density.
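To see roughly where an epidemiologist's minimum comes from, the standard simple-random-sample size formula for estimating a prevalence can be sketched. This is only an illustration of the reasoning, not the Dashboard's calculation; the prevalence and margin values below are assumptions for the example.

```python
import math

def sample_size(p_expected, margin, z=1.96):
    """Minimum random samples needed to estimate a prevalence of
    p_expected to within +/- margin at ~95% confidence (z = 1.96)."""
    return math.ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# Illustrative: estimating a 5% prevalence to within +/- 0.5 percentage points
n = sample_size(0.05, 0.005)  # ~7,300 random samples
```

Scale a number like this across every region you want covered, every few days, and the hundreds of thousands of daily tests cited above start to look plausible as a statistical floor.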
https://www.nytimes.com/interactive/2020/04/17/us/coronavirus-testing-states.html
Vaccine is our hope
Vaccines are the hope we have of returning to a normal life again. The other drugs in development help, but vaccines are the ones to watch. I have tracked these races for 40 years – it is stressful! You can relax until the first announcement of a true phase 3 trial is started – maybe in September, more likely in November. Until phase 3, the vast majority of these drugs will fail (over 5 in 6).
At phase 3, there is a 70% probability of success, and a 12-month timeline is theoretically possible, but not probable – and not for a full license or at global scale. It takes time to ensure the vaccine is safe for so many diverse populations worldwide. Master protocols, fast-track designation from the FDA, adaptive trial designs, ring designs for tracing, concurrent scale-up, and parallel long-term animal and human safety testing can all reduce development times. Still, it is hard to coordinate multicenter, global trials – the subjects, investigators, disease and drug supply must all come together. Then results must be analyzed, reported and submitted to the regulators.
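Taken together, the two figures above imply a rough per-candidate success probability. This is back-of-the-envelope arithmetic using only the numbers quoted here, not a published attrition estimate:

```python
# Rough odds implied by the figures above
p_reach_phase3 = 1 / 6       # "over 5 in 6" fail before phase 3
p_phase3_success = 0.70      # ~70% probability of success at phase 3
p_overall = p_reach_phase3 * p_phase3_success  # ~0.12 per candidate
```

At roughly one-in-eight odds per candidate, it takes a large and diverse portfolio of vaccine programs to make at least one success likely.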
Watch especially for pharmacovigilance and efficacy results that might limit the vaccine to an annual, partially effective application, or that would limit the populations inoculated. Seeing a success in less than 19 months would be hopeful. If the first crop of vaccines fails, attention shifts to the proven vaccine platforms, and likely vaccine development timelines must be extended to 5 or 10+ years.
https://www.nature.com/articles/d41573-020-00073-5
Interested in modeling?
This article outlines the original algorithms used to control the H5N1 virus. Kudos to Kevin Systrom and team for picking it up and posting their work to GitHub. It shocked me (and Luis) that they would post it. Great decision.
(For Luis Code – attached.)
Please use this code to upgrade the Rt model for COVID. The original model uses a 7 day rolling window to compute Rt. The nature of the COVID data sources vs H5N1’s caused some stability issues (I blew up our entire infrastructure and my family won’t speak to me). This model cuts the window to 4.5 days and adds an annealing function that removes outliers from the analysis while retaining them in the data for further analysis. Elegant coding by Luis and his team – Try it out (KS)! Posted on GitHub, 4/20
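The rolling-window idea can be sketched as follows. This is not the posted GitHub code – just a minimal stand-in that fits an exponential growth rate over each window of daily case counts and converts it to Rt via the Wallinga–Lipsitch linearization. The integer window and the serial-interval value are illustrative, and the annealing/outlier step described above is not reproduced.

```python
import numpy as np

def rolling_rt(new_cases, window=7, serial_interval=4.0):
    """Crude Rt estimate: fit an exponential growth rate r over a
    rolling window of daily case counts, then apply the linearized
    Wallinga-Lipsitch relation Rt ~ 1 + r * serial_interval."""
    log_cases = np.log(np.maximum(new_cases, 1))  # guard against zeros
    rt = np.full(len(new_cases), np.nan)
    for t in range(window, len(new_cases)):
        days = np.arange(window)
        # slope of log(cases) over the window = growth rate per day
        r = np.polyfit(days, log_cases[t - window:t], 1)[0]
        rt[t] = 1 + r * serial_interval
    return rt
```

Shrinking the window (as the 4.5-day change does) makes the estimate more responsive but noisier, which is exactly why an outlier-removal step becomes necessary.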
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0002185
The mathematical models
This article provides the mathematical basis for the transmission models and infectiousness levels that I use. With this model, we can quantify the effectiveness of our interventions. COVID-19's asymptomatic transmission levels sustain community spread. Unfortunately, these models demonstrate that without contact tracing, our best distancing efforts will hover at Rt ~ 1. To contain the virus, we must contact trace aggressively – preferably automatically.
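The Rt ~ 1 ceiling can be made concrete with the standard relation between R0 and the fraction of transmission that must be blocked for containment. The R0 value below is an illustrative assumption, not a figure from the article:

```python
def rt(r0, blocked_fraction):
    """Effective reproduction number when a fraction of all
    transmission is blocked (distancing plus tracing combined)."""
    return r0 * (1 - blocked_fraction)

def reduction_needed(r0):
    """Fraction of transmission that must be blocked to push Rt below 1."""
    return 1 - 1 / r0

# Illustrative: with an assumed R0 of 2.5, ~60% of all transmission
# must be blocked; if distancing alone blocks ~50%, Rt still sits above 1,
# and tracing must close the gap.
```

This is why distancing alone tends to stall at Rt ~ 1: each additional percentage point of blocked transmission gets harder to buy with blanket measures, while contact tracing targets exactly the chains that remain.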
https://science.sciencemag.org/content/368/6491/eabb6936
Scenario planning
This is what we are working on at a high level to support scenario planning across epidemiologic models. We are moving from statistical SIR models to SEIRS mechanistic models. Note how much more powerful the mechanistic models become. In time, we will be able to constrain critical variables to observed ranges and better predict the impact of our interventions. This is how we will determine the best economic-versus-disease trade-offs as we move forward to normalcy. Play and have fun!
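A minimal SEIRS sketch of the kind of mechanistic model described above: the waning-immunity flow (R back to S) is what distinguishes SEIRS from SEIR and allows repeated waves. All parameter values below are illustrative assumptions, not values fitted to COVID data.

```python
def seirs_step(s, e, i, r, beta, sigma, gamma, omega, dt=1.0):
    """One Euler step of an SEIRS model over population fractions.
    beta: transmission rate, sigma: 1/incubation period,
    gamma: recovery rate, omega: rate at which immunity wanes (R -> S)."""
    ds = -beta * s * i + omega * r
    de = beta * s * i - sigma * e
    di = sigma * e - gamma * i
    dr = gamma * i - omega * r
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

# Illustrative run: 1% initially infectious, immunity waning over ~180 days
s, e, i, r = 0.99, 0.0, 0.01, 0.0
for _ in range(200):
    s, e, i, r = seirs_step(s, e, i, r,
                            beta=0.3, sigma=1/5, gamma=1/7, omega=1/180)
```

Because each compartment has a mechanistic meaning, each parameter can be constrained to an observed range (incubation period, infectious period, waning time), which is exactly what makes these models more powerful than purely statistical fits.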