Sound and The M.E.T. Effect℠
The art of digital audio.
The ways in which we experience sound, listen to music, and record and tag it with metadata have changed dramatically in just the past few years.
The devices used to capture and deliver sound have been transformed. It is not the speakers, microphones and headphones themselves, but the technologies supporting them and our growing applications for them that are changing the shapes and sizes of sound equipment. Wires have disappeared with evolving Bluetooth capabilities, and the work to reduce distortion and produce superior sound quality is ongoing.
Join the Sound Engineers, Videographers, SFX Specialists, Station Engineers, System Integrators and allied professionals expanding the use and improving the quality of sound in our world.
From the Floor to the Session Rooms, explore this year's offerings below.
The Inter-SDO Group, an informal aggregation of standards development organizations (SDOs) and related industry forums around the world, opens Global TV Tech Day with a look at the key emerging television technologies. The presentation will begin with a keynote on "How to move a broadcast organization into IP and the Cloud," followed by panel discussions on IP Standards for Professional Facilities (including representatives from SMPTE, JTNM and AIMS) and Enabling Personalized and Immersive Content (including a VR/AR/MR standards update, the DVB initiative on VR, Next-generation Audio (NGA), and a producer's point of view).
ATSC A/85 has successfully reduced level jumps between TV programs and prevented audio quality from eroding through the kind of "loudness war" notorious in music production and radio. With much content now also being used for streaming and on social media, a monitoring practice to ensure a good listening experience for any type of audience is proposed. Special attention is devoted to not producing for a lowest common denominator while maintaining good speech intelligibility across platforms. Monitoring based on loudspeakers and headphones is discussed, and details from new research on spectral calibration and level calibration of the listening environment are provided. Finally, from a physiological point of view, the three major components of listener fatigue in broadcast and post production are described and rated.
We propose a new broadcasting technology that helps visually impaired people enjoy televised sports programs. In the commentary of TV sports programs, visually obvious incidents are often left unmentioned, which makes it difficult for audience members who cannot see the screen to understand what is going on. Our solution is to generate auxiliary audio description automatically from metadata obtained in real time for various sports events. This is helpful not only for the visually impaired but for anyone who cannot continuously watch the TV screen. We designed an experimental system that automatically generates audio description for Olympic and Paralympic programs from official metadata called the Olympic Data Feed (ODF). When the system receives an ODF message, it composes a new explanation text suitable for the situation and then conveys it vocally with a speech synthesizer. We ran our system on Rio Olympic and Paralympic programs and successfully provided both captions and audio descriptions for over 2,000 games.
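As a rough illustration of the pipeline described above (receive a metadata message, compose an explanation text, hand it to a speech synthesizer), here is a minimal Python sketch. The field names, event types and sentence templates are invented for illustration; the real ODF schema is far richer, and the synthesizer call is a stub.

```python
# Minimal sketch of an automatic audio-description pipeline:
# one incoming metadata message becomes one spoken sentence.
# Event types, field names and templates are hypothetical.

TEMPLATES = {
    "GOAL": "{athlete} of {country} scores. {home} {home_score}, {away} {away_score}.",
    "START": "The {event} has started.",
}

def describe(message: dict) -> str:
    """Compose an explanation text for one metadata event."""
    template = TEMPLATES.get(message["type"])
    if template is None:
        return ""  # unknown event types are skipped
    return template.format(**message)

def speak(text: str) -> None:
    """Placeholder for a speech-synthesizer call (a real system
    would hand the text to a TTS engine here)."""
    print(text)

msg = {"type": "GOAL", "athlete": "A. Silva", "country": "Brazil",
       "home": "Brazil", "home_score": 1, "away": "Germany", "away_score": 0}
speak(describe(msg))
```

A production system would of course need queuing, timing against the live picture, and language-specific template grammar, but the shape — template lookup, slot filling, synthesis — is the same.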
In this class, mobile journalists will learn how to launch a live stream from their mobile devices. We'll explore options from social networks such as YouTube, Twitter and Facebook as well as robust platforms like Periscope, Livestream and Ustream. Creators will also learn how to tap into the DJI GO app for streaming from drones and the Osmo camera system. We'll also tackle hardware suggestions to improve video and audio quality.
This panel of experts will explore the latest audio equipment and processes currently in use, as well as a look at future developments in audio for VR/AR. Topics covered include audio capture and production for live, cinematic and game VR/AR projects.
Have you ever wondered how to get more out of your voice-over talent? Learn tips and tricks for directing talent for spots, short-form and long-form programming. Whether you are coaching a seasoned star or the guy next door with a good voice, this class will help you learn how to get the read you want. We will also dive into best practices and workflow for recording voice.
Dive deep into the world of audio on your NLE. This hands-on class will be taught in Premiere Pro, though the concepts carry over to other NLEs. From sweetening to sound design, the art of audio will be explored with hands-on exercises. Areas of concentration will include EQ, compression, workflow, master compression and noise reduction. Also expect to fine-tune your sound editing and design in this all-day class.
Join us as we walk through the steps for audio production and finishing in VR and 360 video environments. Learn best practices for recording in the field and finishing in your DAW. Tools demonstrated will be Facebook 360, YouTube, Reaper, Pro Tools and Dolby VR. Workflow in and out of your NLE will be discussed. Middleware tools will be saved for a future class.
As video facilities migrate from SDI embedded audio to an increasing number of IP-based standards such as AES67, maintaining loudness compliance remains as important as ever. This paper explores approaches for maintaining consistent loudness compliance and dealing with audio management challenges in facilities making the transition between SDI and AES67 infrastructure.
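Whatever the transport, loudness compliance ultimately comes down to measurement against a target such as -24 LKFS (ATSC A/85). The sketch below is a deliberately simplified illustration of the idea only: it omits the K-weighting filter, channel weighting and gating that ITU-R BS.1770 requires, so its readings will differ from a compliant meter.

```python
import math

def integrated_loudness_simplified(samples, offset_db=-0.691):
    """Very rough loudness estimate in LUFS for a mono signal.

    A compliant BS.1770 meter additionally applies a K-weighting
    filter and 400 ms gated blocks before averaging; both are
    omitted here to keep the sketch short.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    return offset_db + 10 * math.log10(mean_square)

# One second of a full-scale 997 Hz sine at 48 kHz has a mean
# square of 0.5 (-3.01 dB), so this sketch reports about -3.7.
sine = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
print(round(integrated_loudness_simplified(sine), 1))  # -3.7
```

The point for SDI-to-AES67 migrations is that the measurement is defined on the decoded PCM samples, so the same compliance check applies regardless of whether those samples arrived embedded in SDI or as RTP packets.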
Stereo audio in cinema was demonstrated in the mid-1930s, and a variety of stereo and multi-channel formats were utilized beginning in the 1950s, although most movies were released with a mono soundtrack. Dolby Stereo debuted in 1975, followed 17 years later by Dolby Digital 5.1. When Digital Cinema launched in the early 2000s it utilized a discrete 5.1 mix. Several years later 7.1 was added as a format, and in 2012 Dolby Atmos immersive audio began to be deployed in cinemas around the world. Barco AuroMax and DTS:X have entered the market as alternative immersive audio solutions. How do audio mixers create content that serves this diverse exhibition environment, and what does the future hold?
The world of music licensing is complex, and recent developments only add to the uncertainty. The Department of Justice has concluded its review of the BMI and ASCAP consent decrees, leading BMI to ask a federal court to allow so-called "fractional licensing." An aggressive new Performing Rights Organization (PRO), Global Music Rights, made waves by offering interim licenses for its catalog of songs while suing and being sued by the Radio Music Licensing Committee. Congress is considering whether to create a meaningful public licensing database. Recording artists and record labels continue to push for an over-the-air performance right under federal law and for payments from broadcasters for pre-1972 sound recordings under state laws. Come find out how to make sense of these developments and what more may be in store.
Everything you didn't know you didn't know about audio in Premiere. Jarle will show the easy way to deal with multi-channel audio, setting up your tracks for stereo output and 8-channel archives. Learn how presets for source track selections and audio track heights can speed up your audio editing.
WHAT SOFTWARE, PRODUCT OR TECHNOLOGIES WILL BE USED IN THIS SESSION?
Adobe Premiere Pro
WHAT CONCRETE LESSONS OR SKILLS WILL ATTENDEES TAKE AWAY FROM THIS SESSION?
Work with 5.1 material
Mix and finish with EQ, compression and loudness control
Understand audio preferences
Export a multi-channel audio master
Export multi-channel archive files
WHICH PROFESSIONALS WILL BENEFIT MOST FROM ATTENDING YOUR SESSION?
On August 5, 2016, NBC, Comcast and Dolby Laboratories partnered to bring the Rio 2016 Olympics Opening Ceremony, in Dolby Atmos and 4K HDR with wide color gamut video, to VIP viewing parties in the United States. The effort was several months in the making and required dedicated support from engineers within NBC/Comcast as well as direct support from Dolby Laboratories and other outside vendors. This case study will focus on the planning, design, building and execution of the Dolby Atmos immersive audio for this historic broadcast.
In this session, delivered by Ripple Training founder Steve Martin, you will explore the sound editing and sweetening capabilities of Final Cut Pro X. You'll learn how to use Magnetic Timeline 2 to quickly trim dialogue and how to improve the story pacing. You'll learn some incredibly fast workflows for adjusting the volume of your clips and how to create crossfades with only a few keystrokes.
You'll then dive into Final Cut Pro's powerful organization paradigm called Roles for keeping your soundtrack organized by lanes and learn how to create and apply custom subroles to your dialogue, music and effects.
You'll learn important tools for improving the quality of your dialogue using Final Cut Pro's audio enhancement tools and how to work with EQs and Compressors to selectively control the volume of specific frequencies to make your talent's voices stand out in the mix.
And speaking of mixing, you'll learn best practices for applying effects and how your choices affect the audio signal routing of your clips. Finally, you'll learn how to prepare your mix for final delivery using Roles to create separate stems, so you can hand off your mix to a network, film festival or client.
- Enhance & Improve Dialogue
- Organize & Edit Using Subroles
- Work with EQs and Compressors
- Deliver D, M and E Stems (Roles)
Audio and video over IP have been in use for quite some time now in media contribution and distribution. While discrete digital signal transport (e.g. SDI and AES/MADI) has remained the most commonly used method for media signal transport within production and broadcast facilities, recent technology developments have enabled IT- and IP-based transport methods to gain even greater traction in this last bastion of tradition. Initial technology bridgeheads pushed by individual company efforts, usually based on a blend of technology standards and proprietary seasoning, are now followed by an industry-wide consolidation. Industry alliances like AIMS, AMWA, MNA and VSF are bringing together individual technology achievements to form condensed, best-of-breed concepts based on existing broadcast workflows and proven IT standards. And standards organizations like AES and SMPTE are working hard on defining future-proof interoperability standards based on these concepts. If underlying technology acronyms like IP, UDP, RTP, PTP, SDP, SIP, SAP and SDN sound somehow familiar to you, and you have heard about industry alliances like AIMS, AMWA, MNA, VSF and JT-NM and the concepts they are promoting, such as TR03/04, AES67 and NMOS, but are not really sure how all this relates to each other and to the work AES, SMPTE, IEEE and IETF are currently conducting, this session may be for you.
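To make one of those acronyms concrete: an AES67 audio stream is typically announced with a Session Description Protocol (SDP) record like the sketch below. The addresses, names and clock identity are made up; the `ts-refclk` and `mediaclk` attributes are what tie the RTP stream to a shared PTP (IEEE 1588) reference clock.

```
v=0
o=- 1 1 IN IP4 192.168.1.10
s=Example AES67 stream (hypothetical)
c=IN IP4 239.69.0.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-11-22-FF-FE-33-44-55:0
a=mediaclk:direct=0
```

Reading it line by line: two channels of 24-bit linear PCM at 48 kHz, carried as RTP payload type 96 on UDP port 5004 to a multicast group, with 1 ms packets, synchronized to the named PTP grandmaster.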
Not only did Industrial Light & Magic create the visual effects for "Rogue One: A Star Wars Story," its chief creative officer, John Knoll, developed the film's concept and original story. Rogue One features more than 1,700 visual effects shots, including a fleet of new starships, a digitally recreated central character (Grand Moff Tarkin), and a quintessential third-act space battle. In addition to the new spaceships, there are fresh weapons and droids for which Skywalker Sound designed distinctive, original sounds. Learn how ILM and Skywalker Sound balanced the new with the classic through such technical innovations as virtual production, their proprietary "Flux" software, an innovative production pipeline and more. See some great behind-the-scenes footage and glimpse how sound and visual effects worked together to produce another box-office hit for Lucasfilm.
Produced in partnership with Motion Picture Sound Editors (MPSE)
Since 1985, significant research has focused on listeners' ratings and preferences of loudspeakers based on sound quality. The same standard used to measure end-user preference can be adopted for professional and broadcast monitors, ensuring consistency between the monitors used for production and reproduction of audio content. In this way, consumers would finally hear what the artist or broadcast producer intended. In the past five years, similar research has evaluated headphone sound quality. The commonality between the headphone and loudspeaker target curves means that broadcast engineers and producers can use either loudspeakers or headphones to evaluate audio content and come to similar conclusions. In this presentation, Sean will summarize the current best practices for measuring the sound quality of loudspeakers and headphones based on recent scientific research. Attend this session to learn the science behind sound quality, so that you can select loudspeakers and headphones that will bring your productions to life.
Hear from experts at the forefront of deployment as Ultra HD television broadcasting takes off around the world. The session will include leaders from various sectors of the TV ecosystem, presenting their recommendations on managing transition to the next generation of audio and video content creation and delivery.
Subjects include lessons from the interoperability experiments on adding high dynamic range (HDR) to 4K services, the roadmap for evolving UHD consumer displays, and experience from early deployments of UHD over ATSC 3.0.
Audio at NAB Show
The M.E.T. Effect has enabled new distribution opportunities and unprecedented improvements in the sound quality of content in a very short period of time. Experience first-hand how this phenomenon is changing how audiences consume audio media and entertainment.