groundzeros2015 7 hours ago

Paraphrasing: “My model was right up until… so it was pretty good.”

I would not draw any of the conclusions the author did. This three-variable growth-and-decay model has nothing to do with what happened.
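
If the "3 variables" are SIR-style compartments (an assumption on my part; the article may use a different form), the whole class of model fits in a few lines, which is part of the point: everything about behaviour, policy and variants gets crammed into two rate constants.

  # Minimal sketch of a generic three-variable growth-and-decay (SIR-style)
  # model. This is an assumed form, not the author's actual code, and the
  # parameter values are illustrative.
  def sir_step(s, i, r, beta=0.3, gamma=0.1, dt=1.0):
      new_inf = beta * s * i * dt   # growth: new infections this step
      new_rec = gamma * i * dt      # decay: recoveries this step
      return s - new_inf, i + new_inf - new_rec, r + new_rec

  s, i, r = 0.999, 0.001, 0.0       # fractions of the population
  for _ in range(120):              # 120 simulated days
      s, i, r = sir_step(s, i, r)
  print(f"S={s:.3f} I={i:.3f} R={r:.3f}")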

COVID was a good example to me of a "nerd trap": all these simulators, mapping tools, and exponential graphs got built. Having access to data can give one an illusion of understanding, or at least a distraction from what matters.

  • firesteelrain 6 hours ago

    Agree; there is little evidence these simulators did much beyond occupying people's time for a while. I remember someone in my Systems Engineering master's around 2022 writing a similar simulator, presumably inspired by this one, that tried to model how infection spreads in a cubicle setting.

    There is one positive, though, despite RTO becoming a thing: remote work became more widely recognized, even if it's trending more hybrid.

  • lloydatkinson 5 hours ago

    I remember that constant stream of devs making “Covid graphs”. God was it fucking depressing and a bit cringe.

rob_c 6 hours ago

They're all awful. The ICL model used to "inform" the UK lockdowns had (probably still has) a serious race condition when run multi-threaded, which meant all of the timelines had errors of ±1 week... (It's a miracle the code didn't crash.)
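
To illustrate the class of bug (a made-up Python sketch, not the ICL code; `shared_rng` and the per-region workers are mine): one seeded RNG shared across threads means which thread receives which draw depends on scheduling, so a fixed seed no longer pins down the timeline.

  import random
  import threading

  shared_rng = random.Random(42)   # one seeded RNG shared by every thread
  results = {}

  def simulate_region(region_id):
      # Which 1000 draws this thread receives depends on thread scheduling,
      # not on the seed, so the per-region totals differ from run to run.
      draws = [shared_rng.random() for _ in range(1000)]
      results[region_id] = sum(draws)

  threads = [threading.Thread(target=simulate_region, args=(i,)) for i in range(4)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()
  print(results)   # varies across runs despite the fixed seed

  # The fix is one independently seeded RNG per logical stream, e.g.
  # rngs = {i: random.Random(42 * 1000 + i) for i in range(4)}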

After this was pointed out, pandemic "planning" in the UK simply went from weekly to monthly planning, following the same broken model...

It still turned out to be crazily wrong, over-predicting every single metric it was tasked with simulating by orders of magnitude.

Not to mention it couldn't load configs correctly, work correctly on the national academic supercomputer, or gracefully present any results/findings.

This was signed off _blindly_ by the cluster admins, academics, policy advisors, and international "experts". And once this had been demonstrated, there was significant pushback for over a week claiming there must be a problem with the test methodology (simply running the model and *checking* the output multiple times). Ask me how I know there wasn't.
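
The test methodology amounts to something like the sketch below (`./covid_sim --seed 42` is a placeholder command, not the real CLI): run the binary repeatedly on identical inputs and hash the output; more than one distinct hash proves nondeterminism.

  import hashlib
  import subprocess

  def output_hash(cmd):
      # Run the simulator and fingerprint whatever it writes to stdout.
      out = subprocess.run(cmd, shell=True, capture_output=True).stdout
      return hashlib.sha256(out).hexdigest()

  # Identical command and inputs: a deterministic simulator must produce
  # exactly one distinct hash across all runs.
  hashes = {output_hash("./covid_sim --seed 42") for _ in range(10)}
  print("deterministic" if len(hashes) == 1 else f"{len(hashes)} distinct outputs")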

I'm sure the whole field of pandemic modelling has come on in leaps and bounds in recent years, but it's a shockingly sad truth that most (if not all) UG computing students with a 1st could have done a better job than these experts at the top of their field.

  • anonymousiam 6 hours ago

    The same observation (about the flawed models driving policy) also applies to climate simulations.

    • rob_c 5 hours ago

      Last time I sat down with one of the groups modelling national food availability, their model _needed_ a scratch fs capable of dealing with >1M 4kB files per folder. When asked why they didn't use a db, they replied that databases don't work well with objects larger than 1kB and that it would introduce network latencies into their code. Needless to say, I walked away glad that I couldn't help.
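
      For what it's worth, the workload they described is handled fine by an embedded db with no network hop at all. A counter-sketch (file and table names are mine):

        import os
        import sqlite3

        db = sqlite3.connect("scratch.db")   # local file, zero network latency
        db.execute("CREATE TABLE IF NOT EXISTS blobs (id INTEGER PRIMARY KEY, data BLOB)")

        payload = os.urandom(4096)           # one 4kB object
        db.executemany("INSERT INTO blobs (data) VALUES (?)",
                       ((payload,) for _ in range(10_000)))
        db.commit()

        (data,) = db.execute("SELECT data FROM blobs WHERE id = ?", (1,)).fetchone()
        assert len(data) == 4096             # 4kB blobs are a non-issue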