What does “broadcast-ready” really mean in 2026? This session connects three highly practical advances shaping modern media operations: a hands-on comparison of leading AI models against real newsroom and promo workflows, a deep dive on RIST as a secure, resilient backbone for internet-based contribution and distribution, and an inside look at ATSC 3.0’s Wake-up signaling for next-generation emergency alerting.
Together, these papers map the new baseline for efficiency, reliability, and public trust—on-air and behind the scenes.
Monday, April 20 | 11 – 11:20 a.m. | N256
Alex Gattari, Fabio Gattari
We’re preparing a rigorous, hands-on comparison of four flagship AI systems – Grok, GPT, Gemini, and DeepSeek – designed specifically around broadcast-industry realities. Every test maps to a day-to-day task you’d find in a newsroom, promo team, traffic, or compliance desk: long-form and short-form script generation for promos, continuity, and VO; fact-checking wire copy and guest claims against cited sources before air; violence-scene recognition across a large internal movie/series library to support compliance flags and scheduling decisions; and complex math used for ratings analysis, ad-inventory optimization, make-good calculations, and reconciliation of traffic logs. Each task family targets a different axis of “real usefulness”: cohesion and tone for scriptwriting, verifiability and auditability for editorial checks, safety precision for content understanding, and formal reasoning for finance/ops.
We’ll present outcomes in a compact, decision-friendly table so technical and editorial leads can scan strengths, trade-offs, and operational considerations at a glance. We’re not here to crown a universal “champion.” Instead, the emphasis is fit-for-purpose: which models feel dependable for fast promo drafts, which are safer for compliance screening, and which are more robust for ratings math or reconciliation workflows. (Scores, examples, and tie-breakers will come later; this submission won’t pre-empt them.)
Because deployment constraints shape success, we’ll examine on-premises and private-cloud options through a broadcast lens: air-gapped playout and standards compliance, latency implications for live workflows, GPU sizing and MLOps scaffolding, data residency and retention, and the practical pros and cons versus hosted APIs. Think of this as a field guide for CTOs and engineering heads balancing security, speed, and cost without sacrificing capability.
To ground the end-to-end picture, we’ll also showcase a YouTube integration that assembles auto-generated Shorts from long-form titles in our library: scene selection aligned with brand guidelines, safety screening (including violence detection), script drafting, captions/VO, and packaging – demonstrating how orchestration turns model outputs into repeatable, on-brand deliverables.
We’ll close with a real-world anecdote about an Australian radio station that quietly aired an AI presenter – an illustration of how fast audience expectations (and disclosure norms) are evolving. No verdicts here; just a prompt to think critically about trust and transparency as AI crosses from back-office helper to on-air personality. In short, this work is a practical map for broadcasters: what we tested, why it matters on-air and off-air, how to deploy it responsibly, and where automation can create immediate value.
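The orchestration idea above can be sketched in a few lines. This is a minimal illustration only: every function name here (`select_scenes`, `passes_safety`, `draft_script`, `assemble_short`) is a hypothetical stand-in, not Etere’s actual pipeline or the YouTube API.

```python
# Minimal sketch of a Shorts-assembly pipeline: select scenes, screen them
# for safety, draft a script, and package the result. All names are
# hypothetical placeholders; real scene selection and violence detection
# would call model-backed services.

def select_scenes(title, max_scenes=3):
    """Pick candidate scenes from a long-form title (stubbed)."""
    return [f"{title}-scene-{i}" for i in range(1, max_scenes + 1)]

def passes_safety(scene):
    """Safety screen, e.g. violence detection (stubbed: reject scene 2)."""
    return not scene.endswith("-2")

def draft_script(scenes):
    """Draft a voice-over script from the surviving scenes (stubbed)."""
    return "VO: highlights from " + ", ".join(scenes)

def assemble_short(title):
    """Orchestrate: select -> screen -> script -> package."""
    scenes = [s for s in select_scenes(title) if passes_safety(s)]
    return {
        "title": title,
        "scenes": scenes,
        "script": draft_script(scenes),
        "captions": True,  # packaging step would attach captions/VO
    }

short = assemble_short("documentary-041")
```

The point of the sketch is the shape, not the stubs: each stage is a replaceable unit, which is what makes the output repeatable and auditable.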
Monday, April 20 | 11:20 – 11:40 a.m. | N256
Sergio Ammirata, Ph.D.
The professional media industry – AV, events, corporate and digital out-of-home as well as broadcast – is set firmly on a path of IP connectivity, using the public internet to connect venues, destinations and production centers. The advantages of accessibility and scalability make it an obvious choice. But as more and more productions use remote connectivity, and the scope of these productions becomes more extensive – more cameras, sources and return feeds; multiple locations – so the demand for bandwidth grows and the challenges escalate, while the need remains for “broadcast standards” of five-nines-or-better reliability.
It is also vital to bear in mind that, as the use of tunnelled streams on the public internet for professional media grows, so it becomes ever more attractive to those who seek to profit from cyber-crime. Ransomware is an existential threat for live television. It is also important to consider that state actors might want to interrupt signals for their own interests and to protect their own world view.
The goal is to have a means by which circuits can be quickly and easily established, linking anywhere to anywhere via the internet. These circuits must provide high-quality transmission within the bandwidth available, with minimal latency and high resilience to circuit disruptions. Finally, they must be hardened against cyber attack. This paper will demonstrate that RIST is the only tunnelling technology currently available to deliver against these requirements. It will also show emerging technologies that facilitate the auto-registration of these endpoints. RIST is fully interoperable with open, published standards, and it supports redundant routing, which together with SMPTE ST 2022-7 provides seamless switching between alternate paths. The result is that RIST circuits can survive as much as 55% sustained and 86% short-term packet loss.
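The ST 2022-7 resilience mentioned above rests on a simple idea: send the same sequence-numbered packets over two independent paths, and have the receiver forward the first copy of each sequence number it sees. The sketch below illustrates that deduplication logic only; it is an assumption-level model, not librist or a conformant ST 2022-7 receiver (a real one works on arrival order and RTP timing, not a sort).

```python
# Sketch of ST 2022-7-style seamless protection: the receiver merges two
# redundant streams of (sequence_number, payload) packets, keeping the first
# copy of each sequence number, so loss on one path is hidden as long as the
# other path delivers that packet.

def seamless_merge(path_a, path_b):
    """Merge two redundant packet streams; packets are (seq, payload) tuples."""
    seen = set()
    output = []
    # Simplification: sorting stands in for arrival order in a real receiver.
    for seq, payload in sorted(path_a + path_b, key=lambda p: p[0]):
        if seq not in seen:
            seen.add(seq)
            output.append((seq, payload))
    return output

# Path A lost packet 2; path B lost packet 4; the merged output is complete.
a = [(1, "p1"), (3, "p3"), (4, "p4")]
b = [(1, "p1"), (2, "p2"), (3, "p3")]
merged = seamless_merge(a, b)
```

Because each path carries a full copy, the combined circuit tolerates heavy loss on either leg, which is how the sustained-loss figures quoted in the abstract become plausible.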
It is codec agnostic, so users are free to choose based on their requirements, including the emerging High Throughput JPEG 2000 (HTJ2K) and AV1 patent- and royalty-free codecs. It uses MPEG-TS as the transport stream, so it can be implemented on any current architecture, while taking advantage of advanced features like null-packet suppression and program selection to maximize transmission efficiency. Automatic encoder bitrate management provides dynamic adjustments based on real-time network characteristics. Finally, it is end-to-end encrypted to military standards, and can pass through intermediate nodes without decryption, eliminating points of risk. RIST meets all the demands of the modern media network.
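Null-packet suppression is worth a concrete look. An MPEG-TS mux pads to a constant rate with null packets (PID 0x1FFF) that carry no content, so a tunnel can drop them before transmission and let the receiver re-insert padding. The sketch below shows the packet-level mechanics under stated assumptions (188-byte packets, sync byte 0x47); it is illustrative, not RIST's actual implementation.

```python
# Sketch of MPEG-TS null-packet suppression: drop packets whose 13-bit PID
# is 0x1FFF (the null PID) before sending them over the tunnel.

TS_PACKET = 188    # standard MPEG-TS packet size in bytes
NULL_PID = 0x1FFF  # PID reserved for null (stuffing) packets

def pid_of(packet: bytes) -> int:
    """Extract the 13-bit PID from bytes 1-2 of a 188-byte TS packet."""
    return ((packet[1] & 0x1F) << 8) | packet[2]

def suppress_nulls(stream: bytes) -> bytes:
    """Return the stream with all null packets removed."""
    out = bytearray()
    for i in range(0, len(stream), TS_PACKET):
        pkt = stream[i:i + TS_PACKET]
        if pid_of(pkt) != NULL_PID:
            out += pkt
    return bytes(out)

def make_packet(pid: int) -> bytes:
    """Build a minimal test packet: sync byte, PID, zero payload."""
    return bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF]) + bytes(185)

stream = make_packet(0x100) + make_packet(NULL_PID) + make_packet(0x101)
trimmed = suppress_nulls(stream)
```

On a heavily padded mux, stripping the null PID can reclaim a significant fraction of the contribution bandwidth, which is exactly the efficiency the abstract is pointing at.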
Monday, April 20 | 11:40 a.m. – noon | N256
Jason Kim
The ATSC 3.0 Wake-up feature enables emergency alerts to activate compatible receivers to ensure critical information reaches audiences at risk. Building on next-generation capabilities such as enhanced video, immersive audio, and interactive services, ATSC 3.0 integrates Wake-up signaling through both the Bootstrap layer and the Advanced Emergency Alert Table (AEAT). This paper provides an in-depth examination of the Wake-up mechanism, its technical implementation across the physical and service layers, and its practical implications for broadcasters, device manufacturers, and public safety organizations.
Speakers
Rebecca Hanson – Director-General, North American Broadcasters Association (Moderator)
Alex Gattari – AI Solutions Architect, Etere (Speaker)
Fabio Gattari – Sales Director, Etere (Speaker)
Jason Kim – Sr. Systems Engineer, ONE Media Technologies (Speaker)
Sergio Ammirata, Ph.D. – Founder & Chief Scientist, SipRadius (Speaker)