What 30,000 People and One Weekend Taught Me About Risk
The crisis wasn’t the surprise. The surprise was that we’d seen it coming all along. A field report from a Saturday morning I’d rather forget, and what it took to stop calling it luck.
There is a photograph somewhere on my phone that I took at 11:47 on a Saturday morning. It is a picture of mud. Specifically, it is a picture of my boot, standing in what had been, about sixteen hours earlier, a perfectly acceptable piece of ground, and was now doing a very convincing impression of a swamp.
Behind me, on the other side of a fence that suddenly felt very thin, 10,000 people were waiting to come in.
In a little over an hour, those gates were supposed to open. I was standing in the operational compound with a radio in one hand and a weather app in the other, looking at a site that the wind and rain had spent the previous twelve hours systematically dismantling. Not dramatically. Not in a way that would make the news. But in the quiet, grinding way that turns a production plan into a suggestion and a timeline into a fiction.
The question on the table was simple: do we open?
The answer was not.
Part One: The Bit Where I Explain That The Problem Wasn’t The Weather
Let me be clear about something from the start. This is not a story about weather. Weather is what happened. The story is about what we did in the months and weeks before the weather happened, and what that tells you about how most of us think about risk.
Because here is the uncomfortable truth that I have spent fifteen years learning in various painful ways: the crisis is almost never the surprise. The surprise is that you saw it coming and talked yourself out of taking it seriously.
The sociologist Diane Vaughan has a term for this. She calls it the normalisation of deviance. She coined it while studying the Space Shuttle Challenger disaster, and it describes the process by which clearly unsafe practices gradually become accepted as normal within an organisation, usually because nothing catastrophic has happened yet.
The Challenger’s O-ring seals had shown damage on previous flights. Engineers flagged it. Managers were aware. But because the shuttle had launched successfully despite the damage, the damage itself was reclassified from ‘unacceptable risk’ to ‘acceptable anomaly’. The problem didn’t suddenly appear on January 28th, 1986. It had been there all along. What changed was that the other critical factors finally lined up. (Vaughan, The Challenger Launch Decision, 1996)
My take: I read Vaughan’s book about three years ago and I have not stopped thinking about it since. Not because I run a space programme, obviously. But because I sit in production meetings every week where the same mechanism plays out in miniature. ‘The barrier fence was a bit wobbly last time but it held.’ ‘We didn’t have enough toilets but nobody complained.’ ‘The drainage wasn’t great but it didn’t rain.’ Each of those sentences contains a tiny normalisation. Each one makes the next acceptable. And then one day it rains.
Part Two: The Bit Where The Weather App Becomes A Metaphor
I had been watching the forecast for ten days.
On the Tuesday before the event, there was a 40% chance of rain. That number changed a lot over the following days, as forecasts do, but the wind speed projections were consistent and concerning. By Thursday evening the picture was clear: sustained high winds on Friday, with heavy rain arriving overnight.
This is the point where I need to be honest about something. I knew. We all knew. The forecast was not ambiguous. It was not a coin flip. It was a clear signal that the conditions on site were going to be challenging and that the ground, which was already softer than ideal, was going to deteriorate significantly.
And yet, for about forty-eight hours, the dominant energy in the room was optimism. Not reckless optimism. Not willful ignorance. Just the very human, very understandable hope that it might not be as bad as predicted. That the rain might pass through faster. That the wind might drop. That we’d get lucky.
Daniel Kahneman would recognise this immediately. In Thinking, Fast and Slow he describes how people consistently overweight their hopes and underweight base rates when making decisions under uncertainty. We are, as a species, remarkably good at finding reasons why the thing we don’t want to happen probably won’t.
The weather app wasn’t a metaphor at the time. It was just a screen I kept refreshing. But with hindsight, it represents something bigger: the gap between the information you have and the decisions you make based on that information. Because the data was right there. It was accurate. It was available to everyone involved. The problem was never a lack of information. The problem was what we chose to do with it.
Part Three: The Bit Where I Tell You About A Saturday Morning I’d Rather Forget
I got to site at 5am. The overnight crew had done what they could. The wind had been relentless. Several structures that had been perfectly stable at close of build the previous afternoon were now demonstrably not. Ground conditions had gone from ‘soft in places’ to ‘actively hostile’.
Between 5am and 10am, the team made more consequential decisions than most businesses make in a quarter. Each one mattered. Each one was being made by people who had slept badly, hadn’t eaten properly, and were already running on adrenaline.
There is a famous study by Shai Danziger and colleagues, published in Proceedings of the National Academy of Sciences in 2011, which analysed 1,112 parole decisions made by Israeli judges over a ten-month period. They found that the probability of a favourable ruling dropped from approximately 65% at the start of a session to nearly 0% by the end, before resetting after a food break. The quality of consequential decisions degrades measurably when the people making them are depleted. (Danziger, Levav & Avnaim-Pesso, 2011)
My take: Nobody in the events industry talks about this. We celebrate the ability to ‘make it work’ under pressure. We lionise the production manager who hasn’t slept and is still making calls at 6am. And look, I’ve been that person. I’ve worn the sleeplessness as a badge of honour. But the science is pretty clear that the version of me making decisions at 5am after three hours of broken sleep is a materially worse decision-maker than the version of me who had a full night’s rest and a proper breakfast. We don’t like admitting that because it undermines the mythology of the industry. But it’s true.
At 12:30pm, thirty minutes before gates, we made the call to delay opening by thirty minutes. Not to cancel. Not yet. To delay, reassess, and make a final decision at 1:00pm.
That was the right call. But I want to be honest about how close the other call was. The cancellation call. The one where you tell 10,000 people who have bought tickets, travelled, booked hotels, arranged babysitters, that the thing they came for isn’t happening. That call was fifty-fifty. It was on the table, in the room, being actively discussed.
We opened. The event went ahead. The audience had a good time. Most of them never knew.
Part Four: The Bit Where I Stop Telling The Story And Start Making The Point
Here is what I want you to take from this.
The crisis on that Saturday morning was not caused by the weather. The weather was a variable we knew about a week in advance. The crisis was caused by an accumulation of small optimisms in the days before. Each one was individually reasonable. Collectively, they left us with less margin than we should have had.
A slightly more robust ground protection spec would have made the site more resilient. Positioning certain structures differently would have reduced wind exposure. Having a clearer trigger point for the delay decision would have bought us more time. None of these things were dramatic. None of them were expensive. All of them were knowable in advance.
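To put rough numbers on that mechanism, and I want to stress these are invented numbers for illustration rather than a model of anything, suppose each of those small optimisms quietly gives up a tenth of your operating margin:

```python
# Entirely illustrative: five individually reasonable optimisms,
# each quietly costing 10% of the safety margin. The 10% figure
# is made up; the compounding is the point.
margin = 1.0
optimisms = [
    "the wobbly fence held last time",
    "nobody complained about the toilets",
    "the drainage was fine because it didn't rain",
    "the forecast will probably improve",
    "we can always assess on the day",
]
for reason in optimisms:
    margin *= 0.9  # each one feels survivable on its own
    print(f"{margin:.0%} of margin left after: {reason}")
# Ends at 59%. No single decision chose to run at 59%.
```

Nobody in that chain decided to run the event on sixty percent of its planned margin. That is what ‘individually reasonable, collectively dangerous’ looks like.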
The psychologist Gary Klein popularised a technique called the pre-mortem in a 2007 Harvard Business Review piece. Where a post-mortem asks ‘what went wrong?’, a pre-mortem asks the team to imagine the project has already failed and work backwards to identify why. Klein cites research on ‘prospective hindsight’ showing that imagining an event as having already happened increases the ability to correctly identify reasons for outcomes by roughly 30%, and a later study with Veinott and Wiggins evaluated the technique directly. (Klein, Harvard Business Review, 2007; Veinott, Klein & Wiggins, 2010)
The pre-mortem works because it changes the social dynamics in the room. In a standard planning meeting, raising concerns makes you the pessimist. Nobody wants to be the person who slows everything down by saying ‘but what if it rains and the ground gives way?’ In a pre-mortem, identifying problems makes you the smartest person in the room. Same information. Different frame. Completely different outcome.
My take: We have started running pre-mortems on every event we produce. Every single one. It takes thirty minutes. You tell the team: ‘It’s Monday morning after the event. It was a disaster. Tell me what went wrong.’ And the things that come out of people’s mouths in that session are, without fail, the things that nobody said in the six months of production meetings beforehand. Not because they didn’t think them. Because the meeting structure didn’t give them permission to say them.
Part Five: The Bit Where This Stops Being About Events
I work in live events. That’s my context. But the mechanism I’m describing is not specific to my industry. It is universal.
Every time someone in a meeting says ‘that probably won’t happen’ and everyone nods, the normalisation of deviance is happening. Every time a risk register gets filled in as a compliance exercise rather than a genuine planning tool, it’s happening. Every time a team confuses ‘it was fine last time’ with ‘it will be fine next time’, it’s happening.
Vaughan’s analysis of NASA is often summarised as a story about institutional arrogance. It isn’t. It’s a story about perfectly reasonable people making perfectly reasonable decisions that, in aggregate, created the conditions for catastrophe. Nobody at NASA woke up and chose to be reckless. They chose to be normal. And normal, over time, had drifted.
Richard Feynman, who sat on the Rogers Commission investigating the Challenger disaster, put it as sharply as anyone ever has: “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”
Replace ‘technology’ with ‘event’ and ‘public relations’ with ‘optimism’ and you have a sentence I would quite like tattooed on the inside of my eyelids.
My take: The events industry has a complicated relationship with honesty about risk. We operate in a world where the whole point is to create something that feels effortless and magical, which means the last thing anyone wants to talk about publicly is the seventeen ways it nearly went wrong. But that reluctance to discuss near-misses openly is precisely the mechanism by which deviance normalises. If nobody talks about the time the ground nearly failed, then next time the ground starts to soften, there’s no institutional memory that says ‘this is how it starts’. The silence is the risk.
Part Six: The Bit Where I Try To Be Useful Rather Than Just Reflective
So what do you actually do with this? Here’s what I’ve learned, mostly the hard way.
Run the pre-mortem. Every time. Thirty minutes. Before every event, every project, every significant decision. Imagine it’s failed. Work backwards. Write it down. The things your team says in that room will save you.
Build trigger points, not judgment calls. ‘We’ll assess on the day’ is not a plan. ‘If wind speed exceeds X at Y time, we execute Z’ is a plan. The moment of crisis is the worst possible time to be designing your response. Do it in advance, when you’re rested, fed, and not standing in mud at 5am. (There’s a sketch of what this looks like written down at the end of this list.)
Protect the decision-makers. The research on decision fatigue is not ambiguous. If the people making your most consequential calls are also the people who have been on site for eighteen hours, you have a structural problem. Rotate. Rest. Eat. It is not heroic to make critical safety decisions on four hours of sleep. It is dangerous.
Appoint a designated pessimist. Not a devil’s advocate, which research suggests is actually ineffective because the team knows the criticism is performed rather than genuine (Nemeth et al., 2001). A real person whose job it is to find the holes. Someone who is rewarded for identifying problems, not tolerated for raising them.
Debrief honestly. Then share it. The most valuable thing in any post-event review is not what went well. It’s what nearly didn’t. And the most valuable thing you can do with that information is share it with your peers, your competitors, your industry. Because the silence around near-misses is what allows the same mistakes to happen on someone else’s site next weekend.
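Since the trigger-point idea is the one people most often nod at and then ignore, here is a minimal sketch of what ‘if X exceeds Y, we execute Z’ looks like once you actually write it down. Every metric, threshold and action in it is a hypothetical placeholder; real limits come from your structural engineers and your risk assessments, not from a blog post.

```python
# A minimal sketch of pre-agreed trigger points evaluated against live
# readings. Every metric, threshold and action below is a hypothetical
# placeholder, not a real operating limit.
from dataclasses import dataclass

@dataclass
class Trigger:
    metric: str       # what you measure
    threshold: float  # the pre-agreed limit
    action: str       # the pre-agreed response

# Agreed in advance, in daylight, by people who have eaten.
TRIGGERS = [
    Trigger("wind_gust_mph", 40.0, "suspend work at height, reassess hourly"),
    Trigger("wind_gust_mph", 55.0, "close temporary structures to the public"),
    Trigger("rain_mm_per_hour", 8.0, "deploy ground protection on main routes"),
]

def actions_due(readings: dict[str, float]) -> list[str]:
    """Return every pre-agreed action whose threshold has been crossed."""
    return [t.action for t in TRIGGERS
            if readings.get(t.metric, 0.0) >= t.threshold]

# Saturday, 5am: not a judgment call, just execution.
print(actions_due({"wind_gust_mph": 47.0, "rain_mm_per_hour": 3.0}))
# -> ['suspend work at height, reassess hourly']
```

The code is not the point. The point is that the thresholds and the responses exist before the weather does, agreed by people who were rested when they agreed them.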
Part Seven: The Bit Where I Admit The Ending
We drove home that Saturday night. The event was fine. Better than fine, actually. The audience had a great time. The artists were happy. The bar numbers were good. From the outside, looking in, it was a success.
And somewhere in the debrief the following week, someone said what someone always says: ‘We got lucky.’
Everyone nodded. And I let it go. Because it was easier to agree than to have the longer conversation.
But here’s the thing I’ve been thinking about since, and the thing I want to leave you with:
We didn’t get lucky. We got away with it. And those are not the same thing.
‘Getting lucky’ implies that the outcome was determined by chance. That we rolled the dice and they came up in our favour. That there was nothing we could have done differently.
‘Getting away with it’ implies that there were decisions we could have made earlier, conversations we could have had sooner, contingencies we could have built in advance, that would have meant the difference between a stressful Saturday morning and a controlled, calm execution of a plan we’d already prepared for.
The next time someone in your team says ‘we got lucky’, ask them which one they actually mean. Because the answer matters. One is a story you tell at the pub. The other is a warning you need to hear.
Sources
- Vaughan, D. (1996). The Challenger Launch Decision. University of Chicago Press.
- Klein, G. (2007). ‘Performing a Project Premortem’. Harvard Business Review.
- Veinott, E., Klein, G. & Wiggins, S. (2010). ‘Evaluating the Effectiveness of the PreMortem Technique’. ISCRAM.
- Danziger, S., Levav, J. & Avnaim-Pesso, L. (2011). ‘Extraneous Factors in Judicial Decisions’. Proceedings of the National Academy of Sciences.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Nemeth, C., Brown, K. & Rogers, J. (2001). ‘Devil’s Advocate Versus Authentic Dissent’. European Journal of Social Psychology.
- Baumeister, R., Bratslavsky, E., Muraven, M. & Tice, D. (1998). ‘Ego Depletion: Is the Active Self a Limited Resource?’. Journal of Personality and Social Psychology.
- Feynman, R.P. (1986). ‘Personal Observations on the Reliability of the Shuttle’. Appendix F, Report of the Presidential Commission on the Space Shuttle Challenger Accident.