April 22 - 27, 2017 | Exhibits April 24 - 27 Las Vegas Convention Center

Sound and The M.E.T. Effect℠

The art of digital audio.

How we experience sound and listen to music, and how audio is recorded and tagged with metadata, have changed dramatically in just the past few years.

The devices used to capture and deliver sound have been transformed. Not the speakers, mics and headphones themselves, but the technologies supporting them and the growing range of applications we have for them are changing the shapes and sizes of sound equipment. Wires have disappeared with evolving Bluetooth capabilities, and the work to reduce distortion and deliver superior sound quality is ongoing.

Join the Sound Engineers, Videographers, SFX Specialists, Station Engineers, System Integrators and allied professionals expanding the use and improving the quality of sound in our world.

From the Floor to the Session Rooms, explore this year's offerings below.

Attractions and Pavilions

Spotlight Sessions

Audio-over-IP (AoIP) is occupying a growing share of the audio landscape in broadcast facilities today. In fact, AoIP, in the form of AES67, is integral to the new SMPTE 2110 standard as the transport mechanism for audio. In the growing field of At-Home Production, where audio and video are gathered at the remote venue but mixed and processed at the "home" production facility, AoIP enables many efficiencies at both the venue and the head-end. Wheatstone's Phil Owens and Lon Neumann will discuss audio ingest and processing at the venue, creating a venue-side router matrix with integrated mixing and processing, and the ability to create near-zero latency IFB. Mechanisms for sending the venue audio in the form of multicast streams back to the head-end will be covered, as well as ways to extend control of the venue audio to the head-end production facility.
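As a rough illustration of the transport this session describes, the sketch below packs a minimal RTP header (per RFC 3550, which AES67 builds on) and prepares a socket for sending PCM frames to a multicast group. The group address, TTL, and packet sizing are illustrative assumptions for a hypothetical venue-side sender, not values from the session.

```python
import socket
import struct

def rtp_header(seq, timestamp, ssrc, payload_type=96):
    """Pack a minimal 12-byte RTP header (RFC 3550): version 2,
    no padding, no extension, no CSRC list, marker bit clear."""
    return struct.pack("!BBHII",
                       0x80,                  # V=2, P=0, X=0, CC=0
                       payload_type & 0x7F,   # M=0, payload type
                       seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF,
                       ssrc & 0xFFFFFFFF)

def send_audio_packet(sock, group, port, pcm_frames, seq, ts, ssrc=0x1234):
    """Prepend an RTP header to raw PCM frames and send to a multicast group."""
    sock.sendto(rtp_header(seq, ts, ssrc) + pcm_frames, (group, port))

# Hypothetical venue-side sender: AES67 commonly carries 48 kHz / 24-bit
# audio in 1 ms packets (48 samples x 2 ch x 3 bytes = 288-byte payload).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)
```

One such multicast stream per venue feed is what lets the head-end subscribe to exactly the audio it needs without point-to-point circuits.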

View in Schedule

 

Stereo audio in cinema was demonstrated in the mid-1930s, and a variety of stereo and multi-channel formats came into use beginning in the 1950s, although most movies were released with a mono soundtrack. Dolby Stereo debuted in 1975, followed 17 years later by Dolby Digital 5.1. When Digital Cinema launched in the early 2000s it used a discrete 5.1 mix. Several years later 7.1 was added as a format, and in 2012 Dolby Atmos immersive audio began to be deployed in cinemas around the world. Barco AuroMax and DTS:X have entered the market as alternative immersive audio solutions. How do audio mixers create content that serves this diverse exhibition environment, and what does the future hold?

View in Schedule

 

Audio and video over IP have been in use for quite some time in media contribution and distribution. While discrete digital signal transport (i.e., SDI and AES/MADI) remained the most common method for moving media signals within production and broadcast facilities, recent technology developments have enabled IT- and IP-based transport to gain even greater traction in this last bastion of tradition. Initial technology bridgeheads pushed by individual company efforts, usually based on a blend of open standards and proprietary seasoning, are now giving way to industry-wide consolidation. Industry alliances like AIMS, AMWA, MNA and VSF are bringing together individual technology achievements to form condensed, best-of-breed concepts based on existing broadcast workflows and proven IT standards, and standards organizations like AES and SMPTE are working hard to define future-proof interoperability standards based on these concepts. If underlying technology acronyms like IP, UDP, RTP, PTP, SDP, SIP, SAP and SDN sound somehow familiar, and you have heard of industry alliances like AIMS, AMWA, MNA, VSF and JT-NM and the concepts they are promoting, such as TR03/04, AES67 and NMOS, but are not really sure how all this relates to each other and to the work AES, SMPTE, IEEE and IETF are currently conducting, this session may be for you.
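To make one of those acronyms concrete: SDP (RFC 4566) is the small text format that describes a stream so a receiver knows how to join and decode it. The sketch below parses a hypothetical SDP body for an AES67-style stream; the addresses, clock identity, and stream name are made-up examples, not from the session.

```python
# Hypothetical SDP (RFC 4566) body describing an AES67-style audio stream.
sdp = """\
v=0
o=- 1423986 1423994 IN IP4 192.168.1.10
s=Venue Audio 1
c=IN IP4 239.69.0.1/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/2
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
"""

def parse_sdp(text):
    """Collect SDP lines into a dict mapping each field type
    (the letter before '=') to a list of its values."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        fields.setdefault(key, []).append(value)
    return fields

fields = parse_sdp(sdp)
# "a=rtpmap:96 L24/48000/2" tells a receiver the payload format:
# 24-bit linear PCM, 48 kHz sample rate, 2 channels.
# "a=ts-refclk:ptp=..." names the PTP (IEEE 1588) clock the stream follows.
```

Protocols like SAP or SIP then distribute descriptions such as this one, which is how the alphabet soup of IP, UDP, RTP, PTP and SDP fits together in practice.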

View in Schedule


ATSC A/85 has successfully reduced level jumps between TV programs and prevented audio quality from eroding through the kind of "loudness war" notorious in music production and radio. With much content now also being used for streaming and on social media, a monitoring practice is proposed to ensure a good listening experience for any type of audience. Special attention is devoted to not producing for a lowest common denominator while maintaining good speech intelligibility across platforms. Monitoring based on loudspeakers and headphones is discussed, and details from new research on spectral and level calibration of the listening environment are provided. Finally, from a physiological point of view, the three major components of listener fatigue in broadcast and post production are described and rated.
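For readers unfamiliar with how loudness is quantified: ATSC A/85 is built on the ITU-R BS.1770 measurement, which is based on the mean square of the (K-weighted, gated) signal. The sketch below computes only the core mean-square-in-dB step, deliberately omitting K-weighting, channel gains and gating, so it is a simplified illustration rather than a compliant meter.

```python
import math

def loudness_db_unweighted(samples):
    """Simplified loudness estimate: 10*log10 of the mean square of the
    samples (full scale = 0 dB). The real ITU-R BS.1770 measurement behind
    ATSC A/85 adds K-weighting filters, per-channel gains, and gating,
    all omitted here for brevity."""
    ms = sum(s * s for s in samples) / len(samples)
    return 10 * math.log10(ms) if ms > 0 else float("-inf")

# A full-scale sine wave has a mean square of 0.5, i.e. about -3.01 dB.
tone = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
```

Measuring average loudness this way, instead of peak level, is what removes the incentive to hyper-compress a program to sound louder: compression raises the average, and the program is simply turned down to target.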

View in Schedule