The glorious work of Dominic Cummings

:: politics, science, maths, computer

Or: the Cummings-Johnson effect.

I thought it would be interesting to get an idea of how many people will die because Dominic Cummings thought it was fine to ignore the lockdown rules, and Boris Johnson agreed with him. So I wrote a program to explore this Cummings-Johnson effect.

All the reasons you had to die

Jesus don’t want me for a sunbeam,
because sunbeams are not made like me,
and don’t expect me to cry,
for all the reasons you had to die,
don’t ever ask your love of me.

There are two ways that what Cummings did in March 2020 will probably be killing people:

  • he drove a long distance, presumably taking breaks1, while knowing he was infected with CV19;
  • now that his actions are known, and now that Johnson has supported them, other people’s behaviour will change.

The first of these is likely to have killed people, and to still be killing them, by spreading the virus: for instance to the toilets in service stations2. The second is likely to kill people, and perhaps already has, because it is now general knowledge that Cummings & Johnson think that lockdown rules are for other people — for the little people, not people like them — and so people will take lockdown and social distancing less seriously, and some of them will die as a result.

It’s this second way that they are killing people that I looked at.

The simulator described below is a toy: it’s very much a physicist’s ‘spherical cow’ model. It has no notion of locality, for instance: infected individuals simply pick other individuals at random to try to infect. The results it gives may be qualitatively reasonable, but if they are quantitatively correct that is coincidence. The purpose of writing it, and of the runs described here, was simply to see whether the Cummings-Johnson effect is visible, and to get some kind of qualitative estimate of how large it might be: if their actions will probably kill only a few tens of people then they are doing no more harm than a common-or-garden mass murderer, while if their actions may kill thousands of people, then they’re working on a completely different scale.

Epidemic models which are far better than this exist. For instance the MRC Centre for Global Infectious Disease Analysis — Professor Neil Ferguson’s group — must have one. I would be very surprised if these people haven’t run much better versions of the scenarios I describe below. But the results of these runs don’t seem to have been published. This is sad but, perhaps, not surprising given what we know about Cummings & Johnson and their attitudes to facts which disagree with their fantasy worlds.

Still: if there are results from better models I would very much like to know them.

A mindless epidemic simulator

I wrote a very simple-minded simulator: it is unlikely to be realistic; it’s really a toy model. The results are unlikely to be quantitatively correct, but they may be qualitatively interesting. In the model individuals go through the standard phases:

  • initially they are uninfected & hence susceptible;
  • once they are infected they incubate the disease for \(t_l\) days, where \(t_l = 7\) in all the runs below;
  • they are then infectious for \(t_i = 14\) days;
  • on each of these days they randomly pick another individual, and if that individual is susceptible they infect them with a probability which is initially \(p_i = 0.14\);
  • at the end of the infectious period they either die, with probability \(p_d = 0.01\), or they survive but become non-susceptible.

Additionally there may be a small ‘leakage’: every day, every susceptible person in the population stands a small chance of becoming infected. This models the infection leaking in from abroad, for instance. In all the runs here the leakage is \(p_l = 10^{-8}\).

Finally, the initial number of seeds (initially-infected individuals) can be set, the idea being to start the simulation after a good few people are already infected, to avoid too much uncertainty in the early trajectory of the epidemic. By default \(n_s = n_p/1000\), where \(n_s\) is the number of seeds and \(n_p\) is the population size.

All of the parameters are adjustable, as are how long to run for and what the stopping criteria are (with a leaky model things can keep on happening even after the number of infectious individuals reaches zero).
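
Since the real source isn’t available (see below), here is a minimal sketch in Python of the sort of thing the simulator does. It is only an illustration of the mechanics just described: the function name, defaults and details are mine, and it makes no attempt to be efficient.

```python
import random

# A minimal, unoptimised sketch of the toy model described above.
# This is an illustration, not the actual program: names, defaults and
# details are invented for exposition.

def run_epidemic(n_p=1_000_000, n_s=None, t_l=7, t_i=14,
                 p_i=0.14, p_d=0.01, p_l=1e-8, days=1095, rng=None):
    rng = rng or random.Random()
    n_s = n_p // 1000 if n_s is None else n_s
    # Per-individual state: 'S' susceptible, 'R' recovered (non-susceptible),
    # 'D' dead, or an integer counting days since infection.
    state = ['S'] * n_p
    for i in rng.sample(range(n_p), n_s):
        state[i] = 0                       # seed infections
    deaths = 0
    for _day in range(days):
        infected = [i for i, s in enumerate(state) if isinstance(s, int)]
        for i in infected:
            age = state[i]
            if t_l <= age < t_l + t_i:
                # Infectious: pick someone at random and maybe infect them.
                j = rng.randrange(n_p)
                if state[j] == 'S' and rng.random() < p_i:
                    state[j] = 0
            age += 1
            if age == t_l + t_i:
                # End of the infectious period: die or recover.
                if rng.random() < p_d:
                    state[i] = 'D'
                    deaths += 1
                else:
                    state[i] = 'R'
            else:
                state[i] = age
        # Leakage: every susceptible individual stands a small daily chance
        # of being infected from outside the population.
        for j in range(n_p):
            if state[j] == 'S' and rng.random() < p_l:
                state[j] = 0
    return deaths
```

With the default values this gives the \(R_0\) computed below.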

It is straightforward to compute \(R_0\) for this model: a person is infectious for \(t_i\) days and on each day they stand a \(p_i\) chance of infecting the person they pick, assuming essentially no-one is yet infected (so that whoever they pick is susceptible), so

\[ \begin{align} R_0 &= p_i t_i\\ &= 0.14 \times 14\\ &= 1.96 \end{align} \]

And then \(R\) declines over time as more people are removed from the population. When \(R < 1\) the epidemic dies out, more-or-less gradually, except for leaks causing occasional infections.
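
To spell that decline out (this is just my reading of the mechanics above, not a formula taken from the program): each infectious individual makes \(t_i\) attempts, each of which succeeds with probability \(p_i\) only if the randomly-chosen individual is still susceptible, so roughly

\[ R(t) \approx p_i t_i \frac{S(t)}{n_p} = R_0 \frac{S(t)}{n_p} \]

where \(S(t)\) is the number of susceptible individuals and \(n_p\) the population size: with \(R_0 = 1.96\), \(R\) drops below 1 once fewer than about half (\(1/1.96 \approx 0.51\)) of the population remain susceptible.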

Source code for this model is not currently available, although it may be in future.

How the simulations run

All of \(t_l\), \(t_i\), \(p_i\), \(p_d\) and \(p_l\) can be adjusted during a run: the simulator is told to run for a few days, the values can then be adjusted and then it runs again for some given time. In practice the only parameter that I adjusted was \(p_i\): the probability of infection. Changing this during the run directly changes \(R_0\) and hence \(R\) and alters the course of the epidemic.

There is nothing in the model which prevents any of these parameters being adjusted dynamically, based on the current behaviour of the modelled epidemic. In fact I didn’t do that, but instead set up ‘configuration sequences’: sequences of configurations in which the parameters (in practice just \(p_i\), as well as some reporting parameters) are changed at fixed times, between which the model simply runs.

Because there is inevitable variation between runs, the simulations get run several times, and the model also forks: if I wanted to look at the effect of changing parameters on, say, \(d = 120\), a single simulation is run to \(d = 119\) and then multiple copies are run on from there. This means that any variation before \(d = 120\) is removed from the forks, since they all come from the same simulation run. This process can happen recursively if need be.
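
Schematically (again, this is an illustration rather than the real code, and it assumes a hypothetical Simulation class with a step method and an rng attribute), a configuration sequence can just be a list of (day, parameters) pairs, and forking amounts to copying the simulation state:

```python
import copy
import random

# Hypothetical illustration of configuration sequences and forking.
# Assumes a Simulation class with step(p_i=...) advancing one day and
# an rng attribute holding its random-number generator.

config_sequence = [
    (0,   {'p_i': 0.14}),
    (40,  {'p_i': 0.06}),
    (120, {'p_i': 0.08}),
    (200, {'p_i': 0.06}),
]

def params_for(day, sequence):
    """Return the parameters in force on a given day."""
    current = {}
    for start, params in sequence:
        if day >= start:
            current = params
    return current

def run_forked(sim, fork_day, sequences, days):
    """Run one simulation up to fork_day, then fork it once per
    configuration sequence and run each fork on to the end.
    All sequences must agree before fork_day."""
    for day in range(fork_day):
        sim.step(**params_for(day, sequences[0]))
    forks = []
    for seq in sequences:
        fork = copy.deepcopy(sim)
        fork.rng = random.Random()   # each fork gets its own randomness
        for day in range(fork_day, days):
            fork.step(**params_for(day, seq))
        forks.append(fork)
    return forks
```

Giving each fork its own random state is what makes the forks diverge after the fork day while sharing an identical history before it.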

Some example runs

Here are some simple cases which show the behaviour of the model.

Abandoning mitigation

Here is output for a model epidemic in which the mitigation is abandoned after 2 years:

Mitigated giving up after 2 years, cumulative deaths, population of 1 million

This is the output of a 4-year run of a model with

  • \(n_p = 10^6\);
  • \(p_i = 0.14\) initially;
  • \(p_l = 10^{-8}\).

For the unmitigated forks, \(p_i\) remains at its initial value.

For the completely mitigated forks

\[ p_i = \begin{cases} 0.14&d \lt 40\\ 0.06&40 \le d \lt 120\\ 0.08&120 \le d \lt 200\\ 0.06&d \ge 200 \end{cases} \]

For the ‘giving up’ forks

\[ p_i = \begin{cases} 0.14&d < 40\\ 0.06&40 \le d \lt 120\\ 0.08&120 \le d \lt 200\\ 0.06&200 \le d \lt 730\\ 0.14&d \ge 730 \end{cases} \]

In other words, what this is showing is a scenario where there is no vaccine and mitigation is abandoned after about 2 years. Because some leakage happens, at some point after the mitigation is abandoned the epidemic takes off again and a lot of people die. Exactly when it takes off depends on chance, but in all 5 runs here it’s within about a year and a half.
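
For concreteness, in the hypothetical configuration-sequence form sketched earlier, the ‘giving up’ schedule is just:

```python
# The 'giving up' p_i schedule: identical to the fully mitigated one
# until day 730, when p_i returns to its unmitigated value.
giving_up = [
    (0,   {'p_i': 0.14}),
    (40,  {'p_i': 0.06}),
    (120, {'p_i': 0.08}),
    (200, {'p_i': 0.06}),
    (730, {'p_i': 0.14}),
]
```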

Scaling the average results from this run to a population of 70 million3 results in the following figures, all to 3 significant figures:

  • 551,000 deaths for the unmitigated epidemic;
  • 40,300 deaths for the completely mitigated epidemic;
  • 535,000 deaths for the epidemic in which mitigation is abandoned on day 730.

For the mitigated epidemic this is somewhat lower than what the UK has so far seen, but it is in the right area: the model is clearly not hopeless. In later runs I adjusted the mitigation slightly to compensate for this (see below).

What these results make clear is that, unless there is a vaccine3, mitigation has to continue essentially indefinitely, or the epidemic will take off again.

Chancy runaways

Here are two sets of runs with no initial infected population, \(n_s = 0\): there are initially no infected people and the epidemic takes off due to leakage, with \(p_l = 10^{-8}\) as before.

Firstly for a population of a million:

Unmitigated, no seeds, cumulative deaths, population of 1 million, 10 runs

Well, you can see that the epidemic takes off within less than two years in all cases.

How likely this runaway is to happen in a given interval of time depends on the population size, as smaller populations experience fewer leakage events. Here is a run for a population of 10,000:

Unmitigated, no seeds, cumulative deaths, population of 10k, 10 runs

You can see that only one runaway happened across the ten three-year runs.

The Cummings-Johnson effect

To model this I started with a mitigated epidemic whose \(p_i\) values, before any Cummings-Johnson effect is applied, are:

\[ p_i = \begin{cases} 0.14&d < 40\\ 0.06&40 \le d <120\\ 0.08&120 \le d < 200\\ 0.06&200 \le d < 300\\ 0.08&300 \le d < 600\\ 0.07&d \ge 600 \end{cases} \]

All of the models run for 3 years, or 1095 days, and in addition the unmitigated epidemic is always plotted4. Each model was run 5 times and quoted figures are averages, scaled to a population of 70 million, to 3 significant figures.

Cummings-Johnson on day 120

For this model

\[ p_i = \begin{cases} 0.14&d < 40\\ 0.06&40 \le d < 120\\ 0.08\times \left\{1.02, 1.05, 1.10\right\} &120 \le d < 200\\ 0.06\times \left\{1.01, 1.03, 1.06\right\} &200\le d < 300\\ 0.08\times \left\{1.005, 1.02, 1.04\right\} &300 \le d < 600\\ 0.07\times \left\{1.002, 1.01, 1.02\right\} &d \ge 600 \end{cases} \]

where the triples of numbers represent the Cummings-Johnson effect weakening social distancing by 2%, 5% and 10% respectively on day 120, with the weakening declining over time. Here are plots for this:

Cummings-Johnson on day 120, 2%, 5% and 10%, population of 1 million

Here:

  • the brown curves are the normal courses of the epidemic with and without mitigation;
  • the blue curves are 2%;
  • the orange curves are 5%;
  • the red curves are 10%.

The figures are:

  • 551,000 deaths for the unmitigated epidemic;
  • 63,100 deaths for the mitigated epidemic;
  • 70,300 deaths for the 2% weakening;
  • 86,500 deaths for the 5% weakening;
  • 109,000 deaths for the 10% weakening.

Or in other words:

  • 7,200 additional deaths for 2% weakening;
  • 23,400 additional deaths for 5% weakening;
  • 45,900 additional deaths for 10% weakening.

These numbers seemed far too high to me. And I also suspect that the epidemic in my model happens more slowly (takes more simulated days) than the real one. So I ran three more models, with the Cummings-Johnson effect taking place at successively later times.

Cummings-Johnson on day 200

For this model

\[ p_i = \begin{cases} 0.14&d < 40\\ 0.06&40 \le d < 120\\ 0.08&120\le d < 200\\ 0.06\times \left\{1.02, 1.05, 1.10\right\} &200\le d < 300\\ 0.08\times \left\{1.01, 1.03, 1.06\right\} &300 \le d < 600\\ 0.07\times \left\{1.005, 1.02, 1.04\right\} &d \ge 600 \end{cases} \]

As you can see this allows the mitigated epidemic to run until day 200, when the same decaying effect happens. Here are plots for this:

Cummings-Johnson on day 200, 2%, 5% and 10%, population of 1 million

Figures:

  • 546,000 deaths unmitigated;
  • 69,900 deaths mitigated;
  • 75,100 deaths 2%;
  • 93,700 deaths 5%;
  • 128,700 deaths 10%.

Excess deaths:

  • 5,200 2%;
  • 23,800 5%;
  • 58,800 10%.

This is a little better, but not much, and the 10% case is bizarrely bad.

Cummings-Johnson on day 300

For this model

\[ p_i = \begin{cases} 0.14&d < 40\\ 0.06&40 \le d < 120\\ 0.08&120\le d < 200\\ 0.06&200\le d < 300\\ 0.08\times \left\{1.02, 1.05, 1.10\right\} &300 \le d < 600\\ 0.07\times \left\{1.01, 1.025, 1.05\right\} &d \ge 600 \end{cases} \]

Here are plots for this:

Cummings-Johnson on day 300, 2%, 5% and 10%, population of 1 million

Figures:

  • 551,000 deaths unmitigated;
  • 59,800 deaths mitigated;
  • 73,200 deaths 2%;
  • 90,000 deaths 5%;
  • 138,000 deaths 10%.

Excess deaths:

  • 13,400 2%;
  • 30,200 5%;
  • 78,200 10%.

All these figures are worse than the day 200 case, which I think is because the big increase is happening when things are already too relaxed.

Cummings-Johnson on day 600

For this model

\[ p_i = \begin{cases} 0.14&d < 40\\ 0.06&40 \le d < 120\\ 0.08&120\le d < 200\\ 0.06&200\le d < 300\\ 0.08&300 \le d < 600\\ 0.07\times \left\{1.02, 1.05, 1.10\right\} &d \ge 600 \end{cases} \]

Here are plots for this:

Cummings-Johnson on day 600, 2%, 5% and 10%, population of 1 million

Figures:

  • 546,000 deaths unmitigated;
  • 61,700 deaths mitigated;
  • 63,600 deaths 2%;
  • 68,500 deaths 5%;
  • 80,200 deaths 10%.

Excess deaths:

  • 1,900 2%;
  • 6,800 5%;
  • 18,500 10%.

These seem a little less frightening.

Why is it so fierce?

I was really surprised by how large the differences are. I think part of the answer can be seen by looking at \(R\): at any point the progress of the epidemic goes something like \(e^{\alpha (R -1)t}\), where \(\alpha\) is some fudge factor. The only reason that the exponential runaway doesn’t continue is that \(R\) is a function not only of \(p_i\) but also of the proportion of people who are no longer susceptible. But if that proportion is low, which you very much want it to be, then everything is more or less exponential, and really tiny changes in \(R\) can cause huge explosions.

To control the epidemic over any length of time you need to keep \(R = 1 - \epsilon\) where \(\epsilon \ll 1, \epsilon > 0\): you want to do this because the epidemic will die out so long as \(R < 1\), but the social and economic cost of keeping it significantly below 1 for any length of time is enormous. And for an epidemic which has infected, and therefore killed, only a relatively small proportion of the population, \(R \approx R_0\). So the useful thing to look at is \(\ln R\) & \(\ln R_0\), as this shows small changes near \(R = 1, R_0 = 1\), which is where all the action is5.
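
To put an illustrative number on this sensitivity (my arithmetic, not output from the model): if each generation of infections multiplies case numbers by \(R\), then over 20 generations

\[ 0.98^{20} \approx 0.67, \qquad (0.98 \times 1.05)^{20} = 1.029^{20} \approx 1.77 \]

so a 5% weakening turns a slow decline into growth, a difference of a factor of about 2.6 over that period, and the gap keeps widening exponentially after that.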

Here’s a plot of \(\ln R\) and \(\ln R_0\) for the Cummings-Johnson on day 120 2% variant, and the mitigated version without the 2% bump:

ln R, ln R0, Cummings-Johnson on day 120, 2% and mitigated

ln R, ln R0, Cummings-Johnson on day 120, 2% and mitigated

Interestingly you can see that, for \(d \gtrapprox 500\), the Cummings 2% \(R\) is lower than the mitigated \(R\). But it’s significantly higher for \(d \in [120, 200)\) and somewhat higher for \(d \in [200, 300)\) (although less than 1 in the second interval).

So, well, very small changes to parameters in exponential processes can make very large differences: that should be obvious.

It would certainly be better to do runs with more principled values for things. For instance my ‘decaying Cummings-Johnson effect’ is pretty ad hoc: it would be better to model it as an increase in \(p_i\) which decays exponentially with time, something like \(\Delta p_i = \Delta p_{i,0}e^{-(t - t_0)/\tau}\), as people forget, which would be easy to do. Maybe I will have a go at that in due course.
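
Schematically, and with entirely made-up values for \(\Delta p_{i,0}\) and \(\tau\), that would be something like:

```python
import math

# Exponentially-decaying Cummings-Johnson effect: the mitigated p_i
# plus a bump applied on day t0 which decays with time constant tau.
# delta0 and tau are placeholders, not fitted to anything.
def weakened_p_i(day, base_p_i, delta0=0.004, t0=120, tau=90):
    if day < t0:
        return base_p_i
    return base_p_i + delta0 * math.exp(-(day - t0) / tau)
```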

How many people will Cummings and Johnson kill?

I don’t know. This model is not adequate to give a numerically-correct answer by a long way: it’s full of assumptions, and is in any case an extremely oversimplified model6.

But I couldn’t get the number of people they will kill lower than 1,900, and I worked fairly hard to get it that low. I think my model is too sensitive, even though the numbers of people it kills for the mitigated epidemic are pretty reasonable and I did not fine-tune it for that, so I expect the real number will be somewhere between many hundreds and a few thousand. This is somewhere between mass murder and genocide7.

Did Cummings & Johnson do this deliberately? Probably not. Are these the only people they will kill, or even most of the people they will kill, due to their ideological, careless and incompetent handling of the epidemic and other things? No. Would the harm have been reduced if Johnson had promptly sacked Cummings? Yes. Would the harm still be reduced if he were to sack him now? Yes. Will he sack him? Of course not. Do either of them care that they will kill a lot of people? Definitely not: the people they have killed and will kill are only little people, like ants.

This is the glorious work of Dominic Cummings, aided and abetted by his idiot stooge, Boris Johnson.

Don’t expect me to lie,
don’t expect me to cry,
don’t expect me to die for thee.


  1. He says he did not take breaks. This seems a deeply implausible claim given that he drove 260 miles with a small child in the car. 

  2. Which, again, he claims none of his family visited. 

  3. Another option is that the epidemic becomes globally extinct, when leakage would stop: this seems unlikely. 

  4. This is not really helpful as it makes the plots harder to read. 

  5. In my model I’m treating \(R_0\) as something you adjust via changes to \(p_i\), rather than a constant of the epidemic. \(R_0 = p_i t_i\), and I am adjusting \(p_i\). It would perhaps be better to say \(R_0 = p_{i,0}t_i\) and then define \(p_i = p_{i,0} - p_{i,m}\), where \(p_{i,m}\) is the parameter you adjust, and use that together with the proportion of people remaining susceptible to define \(R\): it doesn’t make any difference to what actually happens though. 

  6. I would be extremely interested in results about the Cummings-Johnson effect from more serious models. Please get in touch if you know of any. I am happy to sign nondisclosure agreements if need be. 

  7. Since we know that BAME people are disproportionately affected by CV19 this really is looking like genocide. Perhaps not a deliberate one, but I wonder how much Cummings & Johnson care that a bunch of BAME people will die because of their actions? Not much, I should think.