The redundancy stack that keeps a hybrid keynote on-air when one signal path goes down.
Most "hybrid" broadcasts have exactly one point of failure: the venue internet drop the on-site team got pointed at twenty minutes before doors. If your keynote is going to a board, a press pool, or 40,000 sales reps inside the firewall, "we'll plug into the venue wi-fi" is not an engineering plan; it's a prayer with a router attached. Here's the stack we run for events where a broadcast drop is a resume-updating event.
Layer 1. Two wired uplinks. Different paths. Audit this on the site survey.
The primary uplink is a dedicated venue circuit with a written SLA, not a "we usually have good wi-fi" assurance from the venue ops manager. The secondary is a separate circuit that shares no meet-me room, demarcation panel, or building entry with the primary, so if a backhoe takes out the street, one path stays up. We've watched it happen. We walk both paths during advance, and if the secondary is sharing conduit with the primary forty feet from the loading dock, the redundancy is theatre.
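Once both circuits are live, we keep a watchdog on them through the show. A minimal sketch of that check in Python, with placeholder interface names and gateway addresses (documentation IPs, not real values):

```python
import subprocess
import time

# Placeholder inventory: interface names and gateway IPs are illustrative.
UPLINKS = {
    "primary":   {"iface": "eth0", "gateway": "203.0.113.1"},   # venue circuit A
    "secondary": {"iface": "eth1", "gateway": "198.51.100.1"},  # diverse-path circuit B
}

def path_is_up(iface: str, gateway: str) -> bool:
    """Ping the uplink's own gateway out of a specific interface (Linux ping -I)."""
    result = subprocess.run(
        ["ping", "-I", iface, "-c", "3", "-W", "2", gateway],
        capture_output=True,
    )
    return result.returncode == 0

while True:
    for name, link in UPLINKS.items():
        if not path_is_up(link["iface"], link["gateway"]):
            # In production this pages the broadcast engineer; printing is a stand-in.
            print(f"ALERT: {name} uplink failed its gateway check")
    time.sleep(30)
```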
Layer 2. Bonded cellular as the third path
Peplink bonded-cellular routers are standard on every hybrid broadcast we run: four SIMs across three carriers, one of them a regional provider with separate tower routing. We've run that exact stack across fourteen Gulf Coast terminals with zero downtime, and on the engagements where a primary circuit dropped mid-show, the cutover landed inside the cellular failover window with no visible disruption.
The obvious next question: what if all three carriers drop at the same time? In a metro venue, that's a fire-marshal problem, not a broadcast problem. In a remote venue, we add a fourth path: Starlink, prestaged and tested in advance, never as a primary.
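The carrier math only works if the failures are independent, which is the whole point of the regional-provider SIM. A back-of-envelope sketch, assuming a purely illustrative 99% per-carrier availability:

```python
# Why three carriers beat one -- if and only if their failures are independent.
# The 99% per-carrier availability is an assumption for illustration.
p_single_down = 0.01                         # assumed chance one carrier is down
p_all_down = p_single_down ** 3              # three independent carriers
print(f"All three down: {p_all_down:.6%}")   # 0.000100%, if truly independent
# Correlated failures (shared towers, venue RF congestion) break this math,
# which is why the remote-venue answer is a fourth, non-cellular path.
```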
Layer 3. Dual encoders. Dual destinations. SRT, not RTMP.
Two hardware encoders, each pushing the same program feed to your primary streaming destination and to a hot backup. The encoders sit on different uplinks, different power phases, and ideally different physical racks, because if a PDU trips, we don't want to lose both encoders to the same incident.
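That separation is auditable before doors, not a matter of faith. A minimal sketch of the pre-show check, with a site-survey inventory invented for illustration:

```python
# Placeholder inventory: circuit, PDU, and rack names are invented.
ENCODERS = {
    "encoder_a": {"uplink": "circuit_1", "pdu": "pdu_east", "rack": "rack_3"},
    "encoder_b": {"uplink": "circuit_2", "pdu": "pdu_west", "rack": "rack_7"},
}

a, b = ENCODERS["encoder_a"], ENCODERS["encoder_b"]
for domain in ("uplink", "pdu", "rack"):
    # Any shared value means one incident can take out both encoders.
    assert a[domain] != b[domain], f"shared {domain}: redundancy is theatre"
print("No shared failure domains across encoders.")
```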
Protocol choice is not a religious debate; it's a packet-loss tolerance debate. RTMP starts visibly degrading in the 2–4% loss range and falls over past 5–7%. SRT holds clean to roughly 10–15% and degrades gracefully past that. On a venue circuit at 9 a.m. on show day, that delta is the difference between a clean keynote and a buffering icon. We run SRT as the default and only fall back to RTMP if the destination platform refuses to ingest it.
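The intuition behind those thresholds: SRT's latency buffer buys retransmission attempts, so residual loss falls off exponentially, while RTMP over TCP stalls the whole stream waiting on one retransmit. A toy model, not SRT's actual algorithm, with RTT and buffer figures assumed for illustration:

```python
def residual_loss(packet_loss: float, latency_ms: float, rtt_ms: float) -> float:
    """Fraction of packets still missing when the latency window closes."""
    retries = int(latency_ms // rtt_ms)    # retransmit opportunities in the buffer
    return packet_loss ** (retries + 1)    # every attempt must fail independently

# 10% loss, a 400 ms SRT latency buffer, 40 ms round trip:
print(f"{residual_loss(0.10, 400, 40):.2e}")   # ~1e-11: effectively clean
# RTMP over TCP has no such budget: sustained 10% loss stalls the stream instead.
```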
Layer 4. A hot backup director's feed
A second director on a mirror switcher (we call the chair TD2), patched into the same audio bus, ready to cut to air inside three to five seconds if the primary switcher goes down. Audience sees a transition. Producer sees a heart attack. The TD2 chair is also where we run the eighth-language feed when the show flow needs it: same crew add, not a checkbox.
Layer 5. Platform redundancy without platform-shopping
Our default destination for board-level and embargoed keynotes is Vimeo Events: private streaming, embargo controls, enterprise SSO. For a board read-out, we want the keynote landing inside the firewall before it lands outside it, and Vimeo is the platform that gets that right.
That said: every platform has had a bad afternoon. Across the last five years we've watched YouTube Live, two enterprise platforms, and even Vimeo on rare occasions hit unexpected downtime, which is why even our default has a documented secondary, the encoder pre-configured to push to it, and an internal-comms script ready to go out the moment we cut over. If your stream needs to land somewhere else inside sixty seconds, the difference between a tested fallback and a "we'll figure it out" is the entire game.
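What "pre-configured" looks like in practice is a watchdog with cutover authority. A sketch of the decision loop, assuming a hypothetical stream-health endpoint on the platform side and a hypothetical control API on the encoder; neither URL is a real product's API:

```python
import time
import urllib.request

# Hypothetical endpoints for illustration; neither is a specific product's API.
PRIMARY_HEALTH = "https://primary.example.com/stream/health"
FALLBACK_PUSH  = "http://encoder.local/api/switch-destination"

def healthy(url: str) -> bool:
    """One health probe; any network error or non-200 counts as a failure."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

failures = 0
while True:
    failures = 0 if healthy(PRIMARY_HEALTH) else failures + 1
    if failures >= 3:   # ~15s of confirmed failure before burning the cutover
        urllib.request.urlopen(FALLBACK_PUSH, data=b"dest=secondary")
        break           # hand off to the comms script and the humans
    time.sleep(5)
```

The three-strike threshold is the design choice that matters: cut too eagerly and you burn the fallback on a blip; cut too slowly and you blow the sixty-second budget.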
The honest target
With this stack, we target 99.99% delivered-uptime, measured on stream-received at the audience endpoint, not stream-sent at the encoder. We miss it when we miss layers. The number isn't magic. The crew add behind the number is.
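For scale, here's what that target actually budgets, worked out per show length:

```python
# What 99.99% delivered-uptime allows, per show length.
for minutes in (60, 90, 120):
    budget_s = minutes * 60 * 0.0001   # the 0.01% we are allowed to drop
    print(f"{minutes}-minute show: {budget_s:.1f}s of outage budget")
# 60 min -> 0.4s, 90 min -> 0.5s, 120 min -> 0.7s: one blown cutover eats it all.
```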
The discipline here is the same discipline we apply to crew sizing: redundancy is not a checkbox. Every layer needs an owner, a tested failover, and someone who has rehearsed the cutover before show day. Treat the third path as the third crew add, not a line item on the gear list.
Walk your next hybrid keynote through this stack with us.
"Hybrid broadcast redundancy" is one of the four working sessions we run from the /resources hub. Ninety minutes, your show flow on the table, our broadcast team on the call.
Book a working session →