Timeouts Aren’t Safety Nets
They are canaries. If they go silent, the answer is not to ignore the silence, it is to make the air safe again.
When an application slows down, the reflex is often to reach for the timeout setting. Ten seconds, twenty, thirty. More rope. What you usually end up with is not more reliability, only more waiting.
Human-computer interaction research has circled this problem for decades. Robert Miller’s 1968 work identified the thresholds: a tenth of a second feels instant, about a second is the limit for keeping a train of thought intact, and by around ten seconds attention is lost altogether. Jakob Nielsen later popularised the same 0.1 / 1 / 10 second rule. Put simply, a two-second pause already feels sluggish. By five seconds most players assume something is broken. At ten seconds, they are gone.
So if your timeout is ten seconds, the player is lost before it even fires. Even two seconds is already testing their patience. A practical ceiling is nearer to two-thirds of a second. If you keep hitting that limit, the system is telling you to optimise or scale. Increasing the timeout only mutes the warning.
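As a rough illustration of what a tight, deliberate timeout looks like, here is a minimal sketch assuming a Python service calling a downstream API with the requests library. The service name, URL handling, and the 0.7-second ceiling are illustrative assumptions, not a prescription; the point is that a timeout is logged as a signal to investigate, not raised to make the noise go away.

```python
import logging
from typing import Optional

import requests  # assumed HTTP client; any client with a timeout knob works

log = logging.getLogger("leaderboard")  # hypothetical service name

# Ceiling from the argument above: roughly two-thirds of a second.
REQUEST_TIMEOUT_SECONDS = 0.7


def fetch_leaderboard(url: str) -> Optional[dict]:
    """Call a downstream service with a tight, deliberate timeout.

    A timeout here is treated as a prompt to optimise or scale the
    slow dependency, not as a prompt to raise the limit.
    """
    try:
        response = requests.get(url, timeout=REQUEST_TIMEOUT_SECONDS)
        response.raise_for_status()
        return response.json()
    except requests.Timeout:
        # Record the event so it surfaces on a dashboard or alert;
        # silencing it by raising the timeout only hides the problem.
        log.warning("leaderboard request exceeded %.1fs", REQUEST_TIMEOUT_SECONDS)
        return None
```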
This is not just an engineering concern. Every extra second a player waits is a second they could be finishing a sign-up, completing a purchase, or simply choosing to do something else. Longer timeouts mean fewer completions, more abandonments, and weaker engagement. Speed is not vanity. It is trust, and it is revenue.
The harder design choice is not how long to wait but how to fail. Do you show cached results, offer a retry, or fall back to something simpler? A graceful “not right now” keeps a player’s trust in the product. An endless spinner does the opposite.
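One way to make that choice concrete is to reach for a cached copy first and only then fall back to an explicit, honest failure. The sketch below continues the same assumptions as the earlier one; the in-memory cache, the function name, and the payload shape are all hypothetical stand-ins for whatever your product actually serves.

```python
import requests  # assumed HTTP client, as in the earlier sketch

CACHE = {}  # player_id -> last good payload; stand-in for a real cache


def get_store_page(player_id: str, url: str) -> dict:
    """Prefer fresh data, but degrade gracefully instead of spinning forever."""
    try:
        response = requests.get(url, params={"player": player_id}, timeout=0.7)
        response.raise_for_status()
        data = response.json()
        CACHE[player_id] = data  # refresh the cache on success
        return data
    except (requests.Timeout, requests.ConnectionError):
        if player_id in CACHE:
            return CACHE[player_id]  # slightly stale beats an endless spinner
        # No cached copy: a graceful "not right now" the UI can show honestly.
        return {"status": "unavailable", "retry_after_seconds": 5}
```

The design choice here is deliberate: the stale copy keeps the player moving, and the explicit unavailable response gives the interface something truthful to render instead of a spinner.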
Timeouts should be tight, deliberate, and rarely adjusted. They are best treated as canaries in the coal mine. If they stop singing, the answer is not to build a bigger cage.