Posted on Tuesday, July 11, 2023

The Ins and Outs of College Sports

By Chris Scheck, Head of Marketing Content, Lawo

Just to be on the safe side: this blog is not about organizing college sports events, or how they work. I want to look at how to get signals into, and out of, “the system” and what to pay attention to.

Most operators understandably want their content to look and sound as compelling as tier-one sports coverage, usually on a smaller budget. Except for some high-tech AR and graphics effects, broadcast-grade equipment is no longer required to get there.

Yet, there is an important decision operators need to make: should they remain within the realm of one format — or even one manufacturer — or should the infrastructure be open to a variety of existing and new developments?

STAGEBOXES

Those in it for the long run will likely look at the second alternative and seek to purchase tools that allow them to remain agile through mixing and matching protocols. Assuming that a networked solution is required, the first devices to look into are gateways. This is a learned term for I/O stageboxes, i.e. the devices to which you connect the cameras, on the one hand, and the microphones, on the other. Stageboxes with SDI inputs, like Lawo’s .edge, receive both audio and video signals, while audio stageboxes can only be used for MIC/Line and/or digital audio sources. While some audio stageboxes only support one audio format (analog, digital or MADI), other models handle all of these formats and usually provide in excess of 40 inputs and outputs.
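
To make the distinction between gateway types a little more concrete, here is a minimal sketch of how the different stagebox categories and their supported formats might be modeled. The class and device names are purely illustrative assumptions for this post, not a Lawo API.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Format(Enum):
    """Signal formats a gateway/stagebox port might accept (illustrative)."""
    SDI = auto()        # embedded video + audio
    MIC_LINE = auto()   # analog microphone/line-level audio
    AES3 = auto()       # digital audio
    MADI = auto()       # multichannel digital audio

@dataclass
class Stagebox:
    name: str
    formats: set[Format]
    io_count: int       # total inputs + outputs

    def accepts(self, fmt: Format) -> bool:
        return fmt in self.formats

# An SDI gateway handles video with embedded audio; a multi-format
# audio stagebox covers analog, AES3 and MADI with 40+ I/Os.
video_gateway = Stagebox("sdi-gateway", {Format.SDI}, io_count=16)
audio_box = Stagebox("audio-stagebox",
                     {Format.MIC_LINE, Format.AES3, Format.MADI}, io_count=48)

print(video_gateway.accepts(Format.MADI))  # False: audio formats need an audio stagebox
print(audio_box.accepts(Format.MADI))      # True
```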

In a networked environment, the stageboxes are connected to a switch, which distributes the data streams on the network. Stageboxes that support wide-area transport, whether they carry audio and video or audio only, can transmit their streams to locations beyond the 330-foot range usually supported by SDI and copper cabling. Taken to the — quite common — extreme, the video and audio signals can be ingested on-location and switched and mixed at the production hub several hundred, or even thousands, of miles away.
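
The topology that makes this possible can be summarized in a few lines. The sketch below is purely conceptual; the device and site names are invented for illustration, and a real deployment would of course involve redundant switches and managed WAN circuits.

```python
# Conceptual remote-production topology: on-site stageboxes feed a local
# switch, whose aggregated streams traverse a WAN link to a production hub
# hundreds of miles away. All names are made up for illustration.
topology = {
    "venue": {
        "stageboxes": ["sdi-gateway-1", "sdi-gateway-2", "audio-stagebox-1"],
        "switch": "venue-core-switch",
    },
    "wan": {
        "from": "venue-core-switch",
        "to": "hub-core-switch",
        "transport": "managed IP circuit",  # replaces the ~330 ft SDI/copper limit
    },
    "production_hub": {
        "switch": "hub-core-switch",
        "consumers": ["video-switcher", "audio-console"],
    },
}

def path(source: str, consumer: str) -> list[str]:
    """Trace the hops a stream takes from an on-site source to a hub consumer."""
    return [source,
            topology["venue"]["switch"],
            topology["production_hub"]["switch"],
            consumer]

print(" -> ".join(path("sdi-gateway-1", "video-switcher")))
```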

THE ESSENCE

The stageboxes referred to above convert the incoming audio and/or video signals to so-called essences, in line with the SMPTE ST 2110 specifications. The most important essence types are video, audio, control and ancillary data. An incoming SDI feed, for instance, is separated into video essences that only carry the footage, audio essences for — you guessed it — audio, control essences and metadata/ancillary essences. Within the ST 2110 sphere, these essence types lead separate lives, making it easy to separate a camera’s footage from the audio captured with its on-board microphone, and to add the stadium sound captured with different microphones instead. This is called audio shuffling.
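
A toy model helps illustrate the idea. Real ST 2110 essences are RTP flows described by SDP files; the sketch below only tags streams by type and source to show how de-embedding and audio shuffling fit together. Function and source names are assumptions made for this example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Essence:
    kind: str     # "video", "audio" or "anc"
    source: str   # where the essence was captured

def split_sdi(camera: str) -> list[Essence]:
    """De-embed an incoming SDI feed into separate essences."""
    return [
        Essence("video", camera),
        Essence("audio", f"{camera}-onboard-mic"),
        Essence("anc", camera),
    ]

def shuffle_audio(video: Essence, replacement_audio: Essence) -> list[Essence]:
    """Audio shuffling: keep the camera's footage, swap in different audio."""
    return [video, replacement_audio]

cam_essences = split_sdi("camera-1")
stadium_audio = Essence("audio", "stadium-mic-array")
program = shuffle_audio(cam_essences[0], stadium_audio)
print(program)  # camera-1 video now paired with stadium sound
```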

Splitting the incoming signals by turning them into essences has the advantage that the mixing console or software in the audio control room only receives audio streams, while the video switcher only requires video essences. Not sending video essences to an audio device is a good way of avoiding bandwidth overload. Video essences indeed require a much higher bandwidth than audio essences. I’m saying this because IP switches have a finite bandwidth, i.e. they can only transport a given amount of data at any one time. Overloads typically lead to slower performance or even data packets being dropped.
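
A quick back-of-the-envelope calculation shows just how lopsided the two bandwidths are. The figures below are payload-only estimates for uncompressed 10-bit 4:2:2 HD video and linear PCM audio, ignoring RTP/IP overhead, so treat them as orders of magnitude rather than exact stream rates.

```python
# Payload-only bandwidth estimates: why video essences dwarf audio essences.
def video_bps(width=1920, height=1080, fps=60, bits_per_pixel=20):
    """Uncompressed 10-bit 4:2:2 video: 20 bits per pixel of active picture."""
    return width * height * bits_per_pixel * fps

def audio_bps(sample_rate=48_000, bit_depth=24, channels=8):
    """Linear PCM audio, e.g. an 8-channel ST 2110-30 style flow."""
    return sample_rate * bit_depth * channels

v = video_bps()   # ~2.5 Gbit/s per HD camera feed
a = audio_bps()   # ~9.2 Mbit/s for 8 channels of audio
print(f"video: {v/1e9:.2f} Gbit/s, audio: {a/1e6:.1f} Mbit/s, ratio ~{v/a:.0f}x")

# On a 10 Gbit/s switch port, that is roughly:
print(f"HD video feeds per 10G port : {int(10e9 // v)}")
print(f"8-ch audio flows per 10G port: {int(10e9 // a)}")
```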

The processed streams provided by the video and audio control rooms may need to be transmitted to other stageboxes, for instance to feed multiviewer screens that are not IP-native, or loudspeakers in the audio control room (unless the console’s built-in analog outputs are used). In any case, the stageboxes used at this stage mainly serve as outputs, providing connectivity to other devices.

EFFICIENCY

Another strategy involves working with audio mixing consoles equipped with the bare minimum of analog inputs and outputs. In this case, the separation is twofold: the stageboxes that receive the audio convert the signals to streams, which are transmitted to the DSP processing core. The core is a separate device and executes the commands sent by the console’s controls. The console is, in effect, a highly sophisticated remote control. Strictly speaking, it could be replaced with a software GUI for less demanding mixing assignments.
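
The control/processing split can be pictured like this. The sketch below is deliberately simplified: the command format, class names and fader semantics are my own assumptions, not a real Lawo control protocol, but they show how a surface (or a software GUI) only issues commands while the core holds the audio state.

```python
class DSPCore:
    """Holds the audio state and performs the processing."""
    def __init__(self, channels: int):
        self.fader_db = {ch: -90.0 for ch in range(channels)}  # all faders down

    def handle(self, command: dict) -> None:
        if command["op"] == "set_fader":
            self.fader_db[command["channel"]] = command["value_db"]

class ConsoleSurface:
    """Physical desk or software GUI: in effect a remote control for the core."""
    def __init__(self, core: DSPCore):
        self.core = core

    def move_fader(self, channel: int, value_db: float) -> None:
        self.core.handle({"op": "set_fader", "channel": channel, "value_db": value_db})

core = DSPCore(channels=128)
desk = ConsoleSurface(core)
desk.move_fader(channel=1, value_db=0.0)   # commentary mic to unity gain
print(core.fader_db[1])
```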

Separating the A__UHD Core from the mc² console offers the benefit that the core can be shared by several virtual and physical mixers simultaneously. Under this scenario, one 1U unit with up to 1024 DSP channels can provide eight mixers with 128 DSP channels each. Any other channel combination in multiples of 32 is also possible (e.g., 32, 64, 96, etc., and more than 128 channels for the main console). Each “mixer slice” comes with its own routing matrix and mixing console peripherals, and is operationally completely independent.
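
The arithmetic behind those slices is simple enough to spell out. The allocator below is only an illustration of the "multiples of 32, up to 1024 channels" rule described above; the function name and error handling are mine, not part of any Lawo product.

```python
# Illustrative slicing of a 1024-channel DSP pool into mixer "slices".
TOTAL_CHANNELS = 1024
GRANULARITY = 32

def allocate(requests: list[int]) -> list[int]:
    """Grant each mixer its requested channel count if it is a multiple of 32
    and the pool is not exhausted; raise otherwise."""
    granted, used = [], 0
    for req in requests:
        if req % GRANULARITY != 0:
            raise ValueError(f"{req} is not a multiple of {GRANULARITY}")
        if used + req > TOTAL_CHANNELS:
            raise ValueError("DSP pool exhausted")
        granted.append(req)
        used += req
    return granted

# Eight mixers at 128 channels each use the full pool ...
print(allocate([128] * 8), "->", sum([128] * 8), "channels")
# ... or give the main console a larger slice and the rest smaller ones.
print(allocate([256, 128, 96, 64, 64, 32]))
```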

THE BIG PICTURE

Working in ST 2110 turns the entire network into the backbone. Bottlenecks regarding router inputs and outputs are no longer an issue, while the infrastructure can be distributed across the production hub and even allows for remote and distributed production. Adding more processing power — and pooling the existing processing power among several users — is much more flexible than in a router-based setup.

Whether the future will be cloud-based is up to the user. Some like the approach, while others worry about security breaches and potential content theft or channel hijacking.

This explains why Lawo’s HOME Apps can run on standard servers on-prem, in a datacenter or in the public cloud. It’s up to you. The HOME Apps accept any protocol you throw at them, be it ST 2110, JPEG XS, NDI, SRT, H.265 or H.264. They are flexible enough to easily accommodate future protocols and formats as they become relevant.
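
To give a flavor of what "accept any protocol" means in practice, here is a purely conceptual dispatch sketch. It is emphatically not the HOME API; the handler names and the normalization step are assumptions used only to show that supporting a future format amounts to registering one more ingest path.

```python
# Conceptual protocol-agnostic ingest: each supported transport/codec maps to
# a handler that normalizes the input for the processing apps downstream.
def ingest_st2110(stream): return f"normalized({stream})"
def ingest_ndi(stream):    return f"normalized({stream})"
def ingest_srt(stream):    return f"normalized({stream})"

HANDLERS = {
    "st2110": ingest_st2110,
    "ndi": ingest_ndi,
    "srt": ingest_srt,
    # adding a future protocol just means registering one more handler
}

def ingest(protocol: str, stream: str) -> str:
    try:
        return HANDLERS[protocol](stream)
    except KeyError:
        raise ValueError(f"unsupported protocol: {protocol}")

print(ingest("ndi", "camera-2"))
```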

Another development will be edge processing, i.e., the ability to process/convert signals immediately after the stagebox input. Why is this clever? Because if a lower-res rendition is enough, less bandwidth will be consumed, and if the processing result is only needed in one specific place, not putting it on the network frees up even more bandwidth.
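
The savings are easy to quantify with the same payload-only arithmetic used earlier. The example below assumes a monitoring destination that only needs a quarter-resolution proxy; the resolutions and the uncompressed-video model are illustrative assumptions, not a statement about any specific product.

```python
# Rough illustration of why edge processing saves network bandwidth: if a
# destination only needs a lower-resolution rendition, producing it right
# behind the stagebox input means the full-rate stream never crosses the
# network. Payload-only estimates, 10-bit 4:2:2 video at 60 fps.
def uncompressed_bps(width, height, fps=60, bits_per_pixel=20):
    return width * height * bits_per_pixel * fps

full_hd = uncompressed_bps(1920, 1080)   # ~2.5 Gbit/s
proxy = uncompressed_bps(960, 540)       # ~0.6 Gbit/s quarter-res rendition

print(f"full HD on the network : {full_hd/1e9:.2f} Gbit/s")
print(f"edge-scaled proxy only : {proxy/1e9:.2f} Gbit/s "
      f"({100 * (1 - proxy/full_hd):.0f}% less)")

# And if the processed result is consumed locally (e.g. a confidence monitor
# next to the stagebox), it never needs to be put on the network at all.
```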

A rapidly growing number of broadcasters, colleges and universities have embraced the approaches mentioned above. They provide the agility operators need to produce high-quality content and to remain at the top of their game.