Live production is evolving fast—moving from SDI to IP, from manual control to intelligent orchestration, and from "video-only" to richer, real-time data and accessibility layers. This session explores next-generation broadcast architectures that synchronize video, audio, and metadata at ultra-low latency and automate live workflows through real-time orchestration, and examines how AI is reshaping audio description for audiences who are blind or have low vision. Together, these presentations reveal how smarter systems can improve speed, reliability, efficiency, and audience experience—while raising critical questions about where automation helps and where human expertise remains essential.
Speakers
James Bloomfield, CTO, MNC Software
Sebastian Franke, Research Associate, Anhalt University of Applied Sciences
Matthias Schnöll, Professor, Department of Media Technology, Anhalt University of Applied Sciences
Joel Snyder, President / Founding Director Emeritus, Audio Description Associates, LLC / Audio Description Project of the American Council of the Blind