Watching the Watchers, Part 2: How Scripts Break Your Frame Budget
Why borrowed milliseconds cost more than they seem.

In Part 1 we explored how third-party scripts watch everything players do, often at the expense of performance. Pixels and tags that were once invisible are now part of the experience itself. The conclusion was simple: every watcher takes something from the player. This second part looks at the mechanics: how scripts consume the frame budget, why long-lived apps expose their weaknesses, and how tools like Partytown, CAPI, and the Measurement Protocol can help.
A modern app should feel effortless. Routes change without reloads, content updates mid-scroll, menus expand the moment they are tapped. At 60 Hz the browser has just 16.6 ms to keep that illusion intact. That is the whole budget for rendering a frame, handling input, and running any scripts in flight.
When the budget is blown during an animation, the result is a visible stutter. When it is blown during an interaction, the screen simply pauses before responding. Both break the spell of immediacy. Into that narrow window we invite analytics, monitoring, and marketing scripts. They hook events, patch APIs, and queue work. Each promises to run politely. On slower devices, politeness does not exist.
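The browser can report when that window is blown. The Long Tasks API flags any main-thread task over 50 ms; a minimal sketch of an observer that logs them:

```typescript
// Flag main-thread tasks over 50 ms, the Long Tasks API threshold;
// at 60 Hz that is roughly three missed frames in a row.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.duration is how long the task held the main thread.
    console.warn(`Long task: ${entry.duration.toFixed(1)} ms (${entry.name})`);
  }
});

longTaskObserver.observe({ entryTypes: ["longtask"] });
```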
What the profiler shows and what it misses
Performance profiles will happily attribute time to tags. A tag that consumes 400 ms of a two-second journey looks like 20 percent overhead. That sounds tolerable. What the profiler does not reveal:
- Framework churn: a callback can trigger change detection across the tree, but the cost is shown as “app work.”
- Leaking listeners: an observer attached to a DOM element can survive navigation. In a long-lived app the heap grows and the app slows.
- Duplicate initialisation: SDKs that re-init on every route change, rebinding handlers and wasting cycles.
The visible overhead is bad enough. The hidden overhead is worse.
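Both hidden costs follow the same pattern, and the counter-pattern is small. A sketch, with a hypothetical `vendorSdk` standing in for any tag: initialise once, and pair every listener with an explicit teardown.

```typescript
// Hypothetical third-party SDK; the names are illustrative only.
declare const vendorSdk: { init(): void; track(el: Element): void };

let initialised = false;

export function initVendorOnce(): void {
  // Guard against SDKs that re-init on every route change,
  // rebinding handlers and wasting cycles.
  if (initialised) return;
  initialised = true;
  vendorSdk.init();
}

export function watchElement(el: Element): () => void {
  const onClick = () => vendorSdk.track(el);
  el.addEventListener("click", onClick);
  // Return a teardown function: in a long-lived app, whoever
  // attaches a listener must detach it on navigation, or the
  // element and its closure survive in the heap.
  return () => el.removeEventListener("click", onClick);
}
```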
The myth of “no impact”
Vendors often insist their scripts have “no impact,” “no perceivable impact,” or that they are “lightweight and quick to download.” That logic stops at the network. A script may be small in bytes but large in consequences. Once loaded it attaches listeners, observes mutations, patches APIs, and queues work. These costs scale with every interaction.
The truth is straightforward. If a script genuinely has no impact on performance, it is unlikely to be doing meaningful work. If it is doing meaningful work, it must consume resources, and those resources come from the same 16.6 ms frame budget as rendering and input. A claim of “no impact” is usually a contradiction: either the script adds overhead, or it is not delivering what is being sold.
Two worlds: with and without isolation
Partytown is a small runtime from Builder.io that changes where scripts execute. Instead of running in the main thread alongside rendering and input, compatible scripts are moved into a web worker. That shift matters because work done in a worker cannot stall a frame. Analytics beacons, network requests, and marketing logic all run off the main thread, leaving rendering and input responsive.
It is not a silver bullet. Scripts that depend on the DOM still run in the main thread. Session replay, element visibility tracking, and other observers remain where they are. But for many analytics and marketing tags, isolation removes much of the cost.
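Opting in is explicit. A minimal sketch using the `partytownSnippet` helper from the `@builder.io/partytown` package, assuming a GTM-style `dataLayer` and a placeholder script URL:

```typescript
import { partytownSnippet } from "@builder.io/partytown/integration";

// Inline the Partytown runtime and forward calls made on the main
// thread (here, dataLayer.push) into the worker.
const snippet = partytownSnippet({
  forward: ["dataLayer.push"],
});

// Rendered into the document head; a script moves to the worker
// simply by opting in with type="text/partytown".
const head = `
  <script>${snippet}</script>
  <script type="text/partytown" src="https://example.com/analytics.js"></script>
`;
```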
In practice, audits show the difference is stark. Journeys that once took two to four seconds dropped to half or a third of that with Partytown enabled. Even heavy transactional flows could complete in under half a second.
Replay and monitoring
It comes back to a point from the first article: these tools were built for a world of page loads. They expect a fresh reload to tear everything down. In a long-lived app, there is no such reset. Listeners survive navigation, observers persist, and references linger in memory.
Two tools, one DOM, twice the cost
Session replay has been measured to add 40 percent overhead on some flows. That makes sense: complicated journeys are complicated to record. Monitoring tools are lighter, but they watch many of the same surfaces: performance observers, error listeners, scroll events, and mutations.
When the two come from different vendors, the overlap doubles. Each attaches its own listeners, its own observers, and its own serialisers. A single scroll is captured twice, a mutation dispatched through two callbacks, and a network error intercepted by two patched layers. Even if replay is sampled at one percent, the duplicated instrumentation remains for every session.
It is not just time. Two tools mean two sets of references held in memory. Listeners bound to elements that no longer exist leak twice as fast. In a long-lived app that compounds into real heap pressure and slower sessions.
If both features live in the same tool the story is a little better. The listeners exist once and the vendor can coordinate. But across different tools there is no coordination. The overhead compounds, the leaks multiply, and the frame budget vanishes.
The right approach is mutual exclusion. Record one percent of sessions in detail, observe twenty percent for timings. Never run both together, and certainly not from separate vendors.
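The gate itself is a few lines. A sketch, assuming hypothetical `startReplay` and `startMonitoring` loaders for the two vendors:

```typescript
// Hypothetical vendor loaders; each dynamically imports one SDK.
declare function startReplay(): void;
declare function startMonitoring(): void;

export function chooseInstrumentation(): void {
  const roll = Math.random();
  if (roll < 0.01) {
    // 1% of sessions: full replay, and nothing else.
    startReplay();
  } else if (roll < 0.21) {
    // Next 20%: lightweight monitoring for timings only.
    startMonitoring();
  }
  // Everyone else: no instrumentation at all.
}
```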
Analytics without the baggage
Campaign optimisation lives and dies on tracking. Without reliable conversions, ad platforms cannot optimise bids or audiences, and marketing spend leaks away. This is why every platform has built a “server-side” channel:
- Conversion APIs (CAPI) from social platforms
- Google Analytics Measurement Protocol (MP)
Both replace the familiar browser tag with a direct HTTP endpoint. Instead of the client firing a beacon from JavaScript, the event is packaged and sent from infrastructure. The pitch is clean: lighter client, better privacy, resilient delivery.
The reality is subtler.
Measurement Protocol in practice
Measurement Protocol is not magic server-side analytics. In a long-lived app, the client still has to decide what happened, such as a route change, a purchase, or a form submission. The difference is only transport: instead of GA’s script attaching listeners, the app or a backend proxy sends a POST to GA’s endpoint. The overhead of the script is gone, but the need for deliberate instrumentation remains. What you do not explicitly send, GA will not see.
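In code, “only transport” looks like this: a minimal sketch of a GA4 Measurement Protocol call, with placeholder credentials, typically issued from a backend proxy rather than the browser:

```typescript
// Placeholder credentials; both come from the GA4 admin UI.
const MEASUREMENT_ID = "G-XXXXXXX";
const API_SECRET = "your-api-secret";

export async function sendPurchase(clientId: string, value: number): Promise<void> {
  const url =
    `https://www.google-analytics.com/mp/collect` +
    `?measurement_id=${MEASUREMENT_ID}&api_secret=${API_SECRET}`;

  // GA only sees what is explicitly sent: one client id, one event.
  await fetch(url, {
    method: "POST",
    body: JSON.stringify({
      client_id: clientId,
      events: [{ name: "purchase", params: { currency: "EUR", value } }],
    }),
  });
}
```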
Server-side GA containers take this a step further. The app emits events, a proxy enriches them with campaign or cohort metadata, and forwards them on. This consolidates logic and reduces duplication, but it is still fundamentally client-signalled.
CAPI in practice
CAPIs solve the same problem for social platforms. Instead of firing a pixel in the DOM, a backend call transmits the conversion. The advantages are similar: no DOM overhead, no reliance on third-party cookies, and fewer points of failure. For campaign optimisation, this preserves accuracy while lifting load from the main thread.
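The backend call is plain HTTP. A sketch following the shape of Meta’s Conversions API, with placeholder pixel id, access token, and API version, and assuming the email has already been SHA-256 hashed as the platform requires:

```typescript
// Placeholders; real values come from the ad platform's settings.
const PIXEL_ID = "1234567890";
const ACCESS_TOKEN = "your-access-token";

export async function sendConversion(hashedEmail: string, value: number): Promise<void> {
  // No pixel in the DOM, no third-party cookie: one backend call.
  await fetch(
    `https://graph.facebook.com/v18.0/${PIXEL_ID}/events?access_token=${ACCESS_TOKEN}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        data: [{
          event_name: "Purchase",
          event_time: Math.floor(Date.now() / 1000),
          action_source: "website",
          user_data: { em: [hashedEmail] }, // SHA-256 of the lowercased email
          custom_data: { currency: "EUR", value },
        }],
      }),
    },
  );
}
```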
Where the line sits
- Campaign optimisation needs high-fidelity signals. CAPI and MP provide them, but only if we instrument cleanly.
- Product analytics is less brittle. Sampling, RUM, and partial replay give enough direction without trying to replicate every GA event.
The rule is simple: treat campaign optimisation as first-class, instrument deliberately, and move it off the client. Treat product analytics as directional, sample heavily, and accept imperfection.
Instrumentation is one form of overhead. Experimentation is another. Both ask the app to carry work not directly tied to the journey, and both benefit from being designed into the codebase rather than layered on afterward.
Two styles of experimentation
Experiments divide into two styles.
- Script-injection tools overlay content after render. They flicker, delay, and attach listeners that often linger beyond their lifetime. In a long-lived app, that means memory leaks. Because they live outside the codebase, they bypass TypeScript and its guardrails. Flags and variants are strings, not checked values. The result is a more fragile lifecycle, unreliable teardown, and weakened state integrity.
- Embedded SDKs treat flags as part of the app itself. Variants are declared in TypeScript, integrated into components, and torn down with them. In a long-lived app, this style is safer. Lifecycle is respected, teardown is consistent, and state integrity is preserved.
The first style corrodes these foundations. The second style reinforces them.
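The difference shows up in the types. A minimal sketch of the embedded style, with a hypothetical flag map and a remote payload from whichever SDK is in use:

```typescript
// Flags declared in the codebase, so the compiler checks every use.
type Flags = {
  checkoutLayout: "control" | "compact";
  showRecommendations: boolean;
};

const defaults: Flags = {
  checkoutLayout: "control",
  showRecommendations: false,
};

// Hypothetical SDK lookup; falls back to the typed default.
export function getFlag<K extends keyof Flags>(key: K, remote: Partial<Flags>): Flags[K] {
  return remote[key] ?? defaults[key];
}

// A typo like getFlag("checkoutLayotu", remote) fails to compile,
// where a string-based script-injection variant fails at runtime.
```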
When scripts break more than speed
The risk is not only stutter. Monitoring scripts can break apps outright:
- CORS and preflight: if headers are misconfigured or proxies are strict, failed requests can throw uncaught errors that bubble into the global handler, destabilising bootstrap
- Monkey-patching: replay tools patch `fetch`, `XMLHttpRequest`, `addEventListener`, and `MutationObserver`. Collisions or recursion can freeze navigation or halt bootstrap
- CSP and ad-block: blocked `connect-src` or filter rules can leave libraries half-initialised. Some fail noisily
- Resource pressure: replay buffers DOM snapshots in memory. On constrained devices this can trigger reloads or crashes
These are not exotic bugs but ordinary misconfigurations amplified by fragile hooks.
Security surface: Every external script is another execution context with access to the same DOM and APIs as the app itself. Even if the vendor is trusted, the dependency chain may not be. A compromised tag or CDN can exfiltrate data as easily as it can measure performance. Guardrails must cover security as much as stability.
WebView makes it worse
All of these problems are amplified inside Android WebView. Chrome on Android runs on V8 with its full set of optimisations. WebView lags behind and often runs with restricted JIT for security reasons. The same script can take twice as long. Garbage collection is less aggressive, so detached DOM nodes hang around longer. Long tasks in JavaScript block not only rendering but also native-side gestures.
What feels sluggish in Chrome feels worse in WebView. A fragile long-lived app becomes more fragile still when squeezed into a weaker runtime.
Guardrails
If scripts must run, they need guardrails:
- Never run replay and monitoring simultaneously
- Enforce sampling: one percent for replay, twenty percent for monitoring
- Wrap initialisation in try/catch and defer until after first meaningful paint (see the sketch after this list)
- Use canaries and kill switches so vendors can be turned off instantly
- Harden CORS and CSP rules
- Canary releases at one percent, with CI tests that load the app alongside vendor snippets and run core journeys
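The try/catch, deferral, and kill switch combine into a single loading pattern. A minimal sketch, with a hypothetical remote `killSwitch` flag; `requestIdleCallback` (not available in every browser) defers the work until the main thread is quiet:

```typescript
// Hypothetical remote flag; flipping it disables the vendor instantly.
declare function killSwitch(vendor: string): Promise<boolean>;

export function loadVendor(name: string, init: () => void): void {
  // Defer until the browser is idle so bootstrap is never blocked.
  requestIdleCallback(async () => {
    try {
      if (await killSwitch(name)) return;
      init();
    } catch (err) {
      // A failed tag must never bubble into the global handler.
      console.error(`Vendor ${name} failed to initialise`, err);
    }
  });
}
```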
Conclusion
Client-side scripts are not free. They take time, trigger framework churn, leak memory, and collide in ways profiles rarely show. Partytown helps by isolating some into workers. Conversion APIs and the Measurement Protocol help by moving signals off the client. Sampling and exclusion help by limiting what remains.
The rule of thumb is simple. Slow journeys get slower, fast ones do not get faster. The player feels it most in the moments that matter, such as checkout, search, or a menu open on a low-end device. And when the same app is forced through WebView without V8’s optimisations, the cost is amplified again.
These tools were built for a world of page loads. Long-lived apps accumulate listeners and never reset. WebView tightens the constraint even further. That mismatch is why memory leaks multiply, why duplicate hooks pile up, and why frame budgets are missed.
The fix is not to abandon measurement, but to re-architect it. Campaign optimisation deserves high-fidelity signals: instrument deliberately, send them through APIs and proxies, not pixels. Product analytics deserves restraint: sample, monitor, record sparingly. Experiments belong in the codebase, not injected afterward. And everything that remains must run with guardrails and kill switches.
Scripts should serve the player, not the other way round.
If Part 1 showed that every watcher takes something from the player, Part 2 shows exactly how they do it.