Intelligent Automation Opens Door to Critical Processing Efficiencies
Today, content comes into the modern broadcast facility from numerous sources, which supply ads, program content, and other media in a diverse array of formats. A significant investment of time, human resources, and equipment is often necessary not only for evaluation and proper handling of inputs from a wide set of contributors, but also for appropriate processing and output of content in the formats and profiles dictated by various distribution outlets. These challenges have made it difficult for broadcasters, who are still feeling the pinch of difficult economic conditions and shrinking budgets, to improve content quality and/or expand their service offerings cost-effectively.
For years, well before the emergence of “anywhere, anytime” viewing of content on computers and mobile devices, broadcasters have been striving to capitalize on the potential benefits of file-based operations to improve efficiency, reduce costs, and scale infrastructure rapidly and cost-effectively in step with changing business and technical demands. Now, however, with the rise of multiplatform content delivery and the increasing fragmentation of viewing audiences and ad revenue, it has become more important than ever that broadcasters find a way to leverage file-based processing solutions to assure the quality and integrity of their content, regardless of the platform on which it’s viewed.
To address the challenges presented by content exchange in multiple formats across multiple platforms, sophisticated file-based media processing platforms are applying a new level of intelligence to the automation of key tasks, including quality assessment and correction of content. Central to this streamlined model of media processing is the ability of the supporting platform to detect the state and characteristics of content, and then to use that “knowledge” to automate correction according to preset rules, standards, or specifications.
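The detect-then-correct model described above can be sketched as a small rules engine. The names, fields, and thresholds below are illustrative assumptions, not a real platform API; the loudness target echoes the ATSC A/85 recommendation of -24 LKFS.

```python
# Minimal sketch of rule-based automated correction: each rule pairs a
# detector (does the asset violate the spec?) with a corrective action.
# All names and thresholds here are hypothetical, for illustration only.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MediaState:
    """Detected characteristics of an ingested asset (illustrative fields)."""
    loudness_lkfs: float
    width: int
    height: int
    corrections: list = field(default_factory=list)

# A rule is a (violation test, correction) pair.
Rule = tuple[Callable[[MediaState], bool], Callable[[MediaState], None]]

def loudness_out_of_spec(state: MediaState) -> bool:
    # ATSC A/85 targets -24 LKFS; assume a +/-2 dB tolerance here
    return abs(state.loudness_lkfs - (-24.0)) > 2.0

def normalize_loudness(state: MediaState) -> None:
    state.corrections.append(f"gain {-24.0 - state.loudness_lkfs:+.1f} dB")
    state.loudness_lkfs = -24.0

RULES: list[Rule] = [(loudness_out_of_spec, normalize_loudness)]

def apply_rules(state: MediaState) -> MediaState:
    """Run every preset rule; only detected violations trigger a correction."""
    for violates, correct in RULES:
        if violates(state):
            correct(state)
    return state
```

Content that already meets the specification passes through untouched, which is what keeps manual intervention to a minimum.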
By extracting and indexing a rich collection of metadata as content is ingested and subsequently applying this intelligence at critical points in the video pipeline, file-based processing solutions are dramatically reducing the degree of manual intervention required to handle diverse inputs and outputs. Even as broadcasters ingest, transcode, edit, and otherwise transform content for distribution, they can rely on automated processing to ensure that media retains its quality and integrity. When robust intelligence on content is combined with powerful processing algorithms, the platform can even improve the quality and/or value of a broadcaster’s media assets through metadata enrichment and selective audio and image processing.
By uniting each step in this process on a single platform, rather than implementing them as a series of isolated solutions lacking “awareness” of their interdependencies, today’s file-based media processing platforms eliminate the costly duplication of commoditized resources — storage, I/O, and processing — required by discrete solutions, as well as the complexities of integrating those solutions.
Replacing the conventional media flow, which relies on watch folders to hand content off between resources, the new processing platform uses its store of media intelligence to support an array of interdependencies and to combine different applications toward defined output objectives. Consequently, broadcasters can implement a file-based processing solution that adheres much more closely to the quality standards media consumers expect while still delivering the output volume required to support large-scale multiplatform content delivery.
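The shift away from watch folders can be pictured as stages that exchange a single asset record, each stage seeing the metadata accumulated by the stages before it. This is a hedged sketch of that idea; the stage names and the dictionary-based asset record are assumptions for illustration, not any vendor's interface.

```python
# Sketch of the shared-intelligence model: stages pass along one asset
# record with accumulated metadata, rather than dropping files into watch
# folders and losing context at each handoff.

from typing import Callable

Asset = dict  # asset payload plus accumulated metadata (illustrative)
Stage = Callable[[Asset], Asset]

def pipeline(*stages: Stage) -> Stage:
    """Compose stages; each one sees everything earlier stages recorded."""
    def run(asset: Asset) -> Asset:
        for stage in stages:
            asset = stage(asset)
        return asset
    return run

def analyze(asset: Asset) -> Asset:
    # Stand-in for ingest-time inspection of the file
    asset["codec"] = "mpeg2"
    return asset

def transcode(asset: Asset) -> Asset:
    # The decision is driven by the earlier analysis, not by which
    # folder the file happened to land in.
    if asset.get("codec") != "h264":
        asset["codec"] = "h264"
        asset.setdefault("history", []).append("transcoded to h264")
    return asset

process = pipeline(analyze, transcode)
```

Because every stage works from the same record, a downstream step can skip work an upstream step has already made unnecessary, which is where the duplication savings come from.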
With this approach, broadcasters can enhance the quality of content regardless of its original state. Typical applications today include audio legalization and color correction, but there is no limit to the specialized processing algorithms that can be brought to bear. The scalability of the file-based processing platform lets the broadcaster ensure that the underlying architecture has the power and agility to support processing demands as the multiplatform media landscape continues to evolve.
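As one concrete flavor of legalization, a video legalizer keeps sample values inside the broadcast-legal range; for 8-bit luma under BT.601/BT.709, the studio range is 16-235. The sketch below applies that rule to a flat list of samples, which is a deliberate simplification: production legalizers operate on full frames, often with soft-knee filtering rather than a hard clamp.

```python
# Hedged sketch of a legalization pass: clamp 8-bit luma samples to the
# broadcast-legal 16-235 studio range (BT.601/BT.709). A real legalizer
# works on whole frames and typically filters rather than hard-clips.

LEGAL_MIN, LEGAL_MAX = 16, 235

def legalize_luma(samples: list[int]) -> list[int]:
    """Clamp each luma sample into the legal studio range."""
    return [min(max(s, LEGAL_MIN), LEGAL_MAX) for s in samples]
```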