NAB Show



Media Infrastructure, Compute and Network Control: Enabling Cloud MCR and Real-Time Workflows

Sunday, April 19 | 9:30 – 10:30 a.m. | N256

Broadcast Engineering and IT (BEIT) Conference

As broadcast workflows shift toward cloud-based MCR, virtualized production, and AI-driven processing, infrastructure performance and control have become critical to real-time operations. This session examines how storage protocols, compute architectures, and network control mechanisms directly impact throughput, latency, resiliency, and workflow efficiency. Papers explore file-sharing protocol performance as a hidden bottleneck in high-bandwidth media environments, dynamic network prioritization using open APIs to protect critical video flows, and the readiness of compressed-domain, cloud-based MCR for prime-time broadcast. Together, these presentations provide practical, engineering-focused guidance for designing scalable, resilient broadcast infrastructure capable of supporting real-time, cloud-enabled workflows.

Subsessions

  • Rethinking File-Sharing Infrastructure for Modern Broadcast Facilities: Addressing the SMB Protocol Bottleneck

    Sunday, April 19 | 9:30 – 9:50 a.m. | N256

    Duncan Beattie

    As broadcast facilities transition to 4K/8K production, cloud-hybrid workflows, and AI-enhanced post-production, file-sharing infrastructure has emerged as an unexpected bottleneck. Whilst the industry invests heavily in storage hardware and networking equipment, the protocols enabling file access often remain overlooked until they limit production efficiency.

    Modern broadcast operations demand unprecedented data throughput. Virtual production requires real-time rendering with immediate access to multiple asset versions. Post-production teams transfer massive files for editing, color correction, and VFX work. Live operations require rapid ingest and distribution. Yet whilst facilities deploy cutting-edge storage and 100GbE networking, traditional file-sharing implementations often operate at a fraction of available bandwidth – creating productivity constraints that slow collaborative workflows.

    We'll examine performance characteristics impacting broadcast operations: multi-threaded architecture requirements, RDMA (Remote Direct Memory Access) implementation differences, SMB compression impact on uncompressed formats, and multichannel performance for bandwidth aggregation. Using anonymized performance data from media production environments, the presentation demonstrates measured throughput differences between implementations, examining impact on collaborative editing, render operations, and archive access. Practical guidance addresses protocol performance evaluation during storage procurement, integration for IP-based facilities and virtualized environments, container deployments, and cloud-hybrid models. The paper will examine how protocol optimization integrates with existing infrastructure investments, drawing on implementation patterns across industries handling similar data challenges.
    The speaker brings unique expertise as a former Microsoft architect who designed the SMB protocol, now applying this knowledge to enterprise storage challenges across broadcast, medical, and high-performance computing. This combination provides insights bridging theoretical design and practical facility deployment, particularly relevant as facilities plan infrastructure supporting AI-enhanced workflows, virtual production, and distributed operations.
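The bandwidth-aggregation idea behind features like SMB multichannel can be illustrated in miniature: fan file access out across several workers, each reading an independent byte range, rather than pulling one sequential stream. The sketch below is illustrative only (file names, chunk size, and worker count are assumptions, not figures from the paper) and uses local file I/O to stand in for network file access:

```python
# Illustrative sketch: parallel byte-range reads, the access pattern that
# lets multi-threaded clients (as in SMB multichannel) aggregate bandwidth
# a single sequential stream leaves unused. Chunk size is an assumption.
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB ranges, a typical large-I/O request size

def read_range(path, offset, length):
    """Read one byte range; each worker holds its own file handle."""
    with open(path, "rb") as f:
        f.seek(offset)
        return offset, f.read(length)

def parallel_read(path, workers=8):
    """Fan range reads out across threads, then reassemble in order."""
    size = os.path.getsize(path)
    ranges = [(off, min(CHUNK, size - off)) for off in range(0, size, CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda r: read_range(path, *r), ranges)
    return b"".join(data for _, data in sorted(parts))

if __name__ == "__main__":
    payload = os.urandom(16 * 1024 * 1024)  # 16 MiB of test data
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(payload)
    try:
        assert parallel_read(tmp.name) == payload  # reassembly is lossless
    finally:
        os.unlink(tmp.name)
```

Whether parallelism actually helps depends on the storage path: over a real SMB mount, the gain comes from spreading requests across connections and NICs, which this local sketch only mimics structurally.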

  • Real-Time Device Prioritisation Using Network APIs

    Sunday, April 19 | 9:50 – 10:10 a.m. | N256

    Sam Yoffe

    All network links are subject to limitations on the amount of traffic that they can support, whether this is a 100 Gb/s fibre link, a 12G SDI cable, or a Wi-Fi connection. Wireless links typically have lower capacity as they require the use of a range of radio frequency (RF) resources to transport the information. Radio spectrum is a finite resource, and as wireless technologies have evolved, the need for access to more spectrum has pushed services towards progressively higher frequencies (such as the 26 GHz mmWave band) where there is more unused spectrum available but with reduced coverage range.

    Networks based on 5G NR technology (typically deployed around 4 GHz) can provide single-layer uplink spectral efficiencies of around 1–1.5 bit/s/Hz for downlink-biased configurations (as used by public network operators). Private networks that can be configured to provide uplink-biased connectivity can increase this uplink performance to over 4 bit/s/Hz, but these are still potentially resource-constrained links (particularly in smaller channels).

    One attractive feature of private 5G networks, or slices on a public network, is the reduction in complexity offered by the ability to share the same network infrastructure and resources between numerous user devices and services as and when they are needed. However, this can lead to congestion, where demand for network resources cannot be satisfied by those available. In this case, increases to latency and jitter, or even packet loss, can occur and introduce unacceptable artefacts or frame drops. The 5G standard provides an advanced scheduler to allocate resources among devices, and a plethora of ways to manage quality of service (QoS). These can be readily established for static priority needs, but a major challenge is adapting to dynamic prioritisation requirements, which has become known as quality-on-demand (QoD).
    This paper discusses the implementation and use of network APIs to provide dynamic control of network prioritisation, and its relevance and importance for the broadcast industry in protecting key video feeds or critical links. The emerging requirement for implementing open network APIs to maximise vendor and operator compatibility is also addressed.
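An open QoD network API of the kind discussed above is typically exercised by creating a short-lived "session" that asks the operator's network to prioritise a specific flow. The sketch below follows the general shape of the open CAMARA Quality-on-Demand API, but the endpoint URL, QoS profile names, and exact field layout are assumptions for illustration; real deployments vary by operator and API version:

```python
# Illustrative sketch of a quality-on-demand (QoD) request in the style of
# an open network API such as CAMARA QoD. Endpoint, profile names, and
# field layout are assumed, not taken from any specific operator.
import json
from urllib import request

# Hypothetical operator endpoint -- not a real URL.
QOD_ENDPOINT = "https://api.example-operator.net/quality-on-demand/v0/sessions"

def build_qod_session(device_ip, app_server_ip, profile="QOS_E", duration_s=3600):
    """Build a QoD session body asking the network to protect one flow.

    profile: an operator-defined QoS class (e.g. a low-latency uplink
    profile suited to a contribution video feed) -- assumed name.
    """
    return {
        "device": {"ipv4Address": {"publicAddress": device_ip}},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": profile,
        "duration": duration_s,
    }

def request_priority(session_body, token):
    """POST the session to the (hypothetical) operator endpoint."""
    req = request.Request(
        QOD_ENDPOINT,
        data=json.dumps(session_body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    return request.urlopen(req)  # created session returned on success

if __name__ == "__main__":
    body = build_qod_session("203.0.113.10", "198.51.100.7", duration_s=1800)
    assert body["qosProfile"] == "QOS_E"
```

The session-based design matters operationally: priority is granted for a bounded duration and can be extended or torn down as the live event schedule changes, rather than being baked into static network configuration.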

  • Is Cloud-Based MCR Ready for Prime Time?

    Sunday, April 19 | 10:10 – 10:30 a.m. | N256

    David Edwards

    Master Control Room (MCR) operations form the cornerstone of live broadcast workflows – guaranteeing seamless content ingress, signal monitoring, format manipulation and stream egress to internal or external takers. Historically, these functions have relied on on-premises hardware operating predominantly in the uncompressed domain, undergoing an evolution from SDI connectivity to IP-based SMPTE ST 2110 networking. For some, this transition has proved a complex and resource-intensive operation. As the broadcast industry shifts towards all-IP environments, hardware-centric solutions may look increasingly misaligned with the dynamic demands of modern broadcasting, where capacity must scale rapidly to accommodate fluctuating live event schedules and diverse egress requirements for national or international distribution.

    This paper explores the viability of software-defined and cloud-based technologies to address these challenges, adopting a "Compressed Domain – First" philosophy to optimize cost, network bandwidth, and processing efficiency. Following a typical MCR workflow, this paper will detail how content can be ingested as IP transport streams, cleaned, validated and conditioned to meet house standards. Key innovations include enhanced stream resiliency through visually seamless switching between coherent or non-coherent live compressed streams. Applications for failover between parallel video sources or processing instances are examined, demonstrating how critical failures or maintenance windows can be instantaneously managed with visually hitless transitions to alternative paths. Building on this foundation, this paper investigates how it is possible to streamline MCR efficiencies through provision of scheduled and on-demand program content insertion to fulfil obligations for diverse takers via slate, blackout or show-reel integration – all within the compressed domain.
    While compressed-domain processing offers significant advantages, uncompressed video operations remain essential for certain tasks. The paper evaluates low-latency interchanges between compressed and uncompressed domains, showcasing software-based processing at scale for functions like graphics overlay, video format conversion, and fully motion-compensated frame-rate conversion. Through use-case examples, technical analysis of processing load, and details of transport-stream processing techniques, this paper heralds the potential to simplify complex MCR infrastructure, maintain video quality, deliver lower-cost implementations, and scale capacity on demand – making the case that a "Compressed Domain – First", cloud-based MCR is now ready for prime time.
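The hitless-failover idea described above can be reduced to a simple principle: feed two redundant copies of the same packetised stream over independent paths, and emit each packet once, taken from whichever path delivered it. The sketch below is an abstract illustration of that merge logic (in the spirit of SMPTE ST 2022-7 seamless protection switching, here applied loosely to compressed packets); the packet format and sequence numbering are assumptions, not the paper's implementation:

```python
# Illustrative sketch: hitless merging of two redundant packet streams by
# deduplicating on sequence number. A packet is lost only if BOTH paths
# lost it. Packet representation (seq, payload) is an assumption.

def merge_redundant(path_a, path_b):
    """Emit each sequence number once, from whichever path delivered it.

    path_a / path_b: iterables of (seq, payload) tuples, possibly with
    gaps where packets were lost. Output is in sequence order.
    """
    arrived = {}
    for seq, payload in list(path_a) + list(path_b):
        arrived.setdefault(seq, payload)  # first arrival wins; dupes ignored
    return [arrived[seq] for seq in sorted(arrived)]

if __name__ == "__main__":
    # Path A lost packet 2; path B lost packet 4. The merge is complete.
    a = [(1, "p1"), (3, "p3"), (4, "p4")]
    b = [(1, "p1"), (2, "p2"), (3, "p3")]
    assert merge_redundant(a, b) == ["p1", "p2", "p3", "p4"]
```

A real implementation must additionally bound reordering with a receive buffer and align the two paths in time; switching between *non-coherent* compressed streams, as the abstract notes, further requires conditioning the streams so the splice point is visually seamless.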
