How to Effectively Measure, Analyze, and Optimize QC for File-Based Broadcast Workflows

Over the years, the broadcast industry has shifted from tape-based to file-based workflows in an effort to increase operational efficiencies and reduce overall expenditures. Since that transition, file-based automated content QC solutions have emerged as the ideal method for ensuring superior content quality, providing increased cost savings and reliability over the traditional approach of visual inspection.

Yet, over the last decade, file-based media workflows have become much more sophisticated, putting tremendous pressure on broadcasters to ensure content quality. While broadcasters could once get away with a simple QC model of checking a file, reviewing its verification report, taking the necessary actions, and then forgetting about the issue, this approach is no longer effective for long-term planning of content QC processes or for making strategic changes. Broadcasters need advanced tools that give a more holistic view and deeper insight into how content QC has performed over long periods of time and across different departments and sites.

This article describes a Measure, Analyze, Optimize (MAO) framework for incorporating data analytics in content QC processes. In addition to describing the benefits of this framework, the article will examine different use cases that illustrate how to get the most out of content QC.

When content QC solutions are used continuously, a large amount of QC data is generated. By analyzing this data, broadcasters can identify trends and gain a deeper understanding of content quality across the organization. These insights can help broadcasters make strategic improvements in the content QC processes and achieve greater organizational efficiencies.

The MAO framework is well suited to analyzing QC data. It is a three-stage cycle that begins by tracking important QC performance metrics over the long term. Next comes analysis: during this stage, the metrics are examined to identify common patterns and operational issues; the analysis sheds light on areas of improvement and suggests required changes. In the optimization stage, those changes are applied to the content QC processes.

The cycle is then repeated. Using a file-based QC solution, broadcasters will take a second measurement, verifying that the applied changes have led to improvements in different QC performance metrics. New measurements are analyzed for additional issues and required changes.

The following sections will look at concrete examples of ways the MAO framework can be applied to improve content QC processes.

Asset Categorization

Broadcasters today are handling a huge number of assets. With so many files at their disposal, most broadcasters only have a vague idea of what types of assets they’ve acquired or created over the years. Cataloging all of the assets can be a time-consuming and expensive process. Whether or not a content management system (CMS) is being utilized, all assets usually pass through some level of QC.

QC reports contain a wealth of metadata as well as error information about the files. The database where QC reports are stored can be mined to analyze and categorize the assets. For instance, broadcasters can determine how many hours of content have been processed, which bit rates are being used, and which assets are HD files.
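
As a minimal sketch of this kind of mining, the following Python snippet assumes the QC report database has been exported to a CSV file; the file name and all column names here are hypothetical, not part of any particular QC product:

import pandas as pd

# Hypothetical export of the QC report database; all column names are assumptions.
reports = pd.read_csv("qc_reports.csv")

# Total hours of content that have passed through QC.
total_hours = reports["duration_sec"].sum() / 3600

# Which bit rates are in use, and how often.
print(reports["bit_rate_mbps"].value_counts())

# How many assets are HD (here assumed to be recorded as "1080" in a resolution column).
resolutions = reports["resolution"].value_counts()
print(f"{total_hours:.0f} hours processed; HD files: {resolutions.get('1080', 0)}")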

When transitioning from SD to HD, categorizing assets can be extremely useful, as it gives broadcasters an understanding of how much legacy content still exists. Figure 1 shows that a total of 38,864 files had undergone QC. The files spanned 24 different resolution values. The most common resolution was 625 (SD); about 60 percent of the files were encoded at this resolution. The second most common value was 1080 (HD), which covered about another 34 percent of the files. There was one odd file with a resolution of 528×480. Because a significant share of the assets are SD, the broadcast organization in this case may decide that maintaining an SD-to-HD upconversion workflow is necessary.

Figure 1. Resolution of different files

QC Results Summarization

Performing an in-depth analysis of QC results is crucial for understanding the content QC process and ultimately reducing the number of files that are failing. For example, broadcasters can look at what percentage of files are failing due to an error, the different kinds of errors present in various files, and which errors are more prominent than others.

By digging deeper into the data, broadcasters can see how the failure rate changes over time. Figure 2 shows how the number of tasks changes from month to month, broken down by success, failure, or warning. In this particular example, the failure rate shows little improvement over time.

Figure 2. Month-wise variation in successes and failures
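
A month-wise breakdown of this kind can be computed from the task data itself. The sketch below again assumes a hypothetical CSV export, with an assumed completion timestamp column and a status column whose values include "failure":

import pandas as pd

# Hypothetical export of QC task results; column names and status values are assumptions.
tasks = pd.read_csv("qc_tasks.csv", parse_dates=["completed_at"])

# Count success/failure/warning outcomes per month.
monthly = (tasks.groupby([tasks["completed_at"].dt.to_period("M"), "status"])
                .size()
                .unstack(fill_value=0))

# Failure rate per month, to see whether it improves over time.
monthly["failure_rate"] = monthly["failure"] / monthly.sum(axis=1)
print(monthly)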

Looking at this data, broadcasters can identify the reasons why the failure rate is not improving over time. Once the reasons have been identified, the next step is to carry out operational improvements and drive the failure rate down.

There are several ways to segment the failure data. One option is to break failures down by watch folder to identify the folders that have more problems. Alternatively, broadcasters can look at the same data separately for different types of content (e.g., HD vs. SD). If the broadcaster operates stations in multiple geographic locations, gaining insight into which sites tend to have more failures than others can be valuable.

Broadcasters can also drill into the failure data by analyzing it against parameters such as test plans, watch folders, content locations, and checkers. (See Figure 3.)

Figure 3. Task results by different criteria

Looking at the watch folders in Figure 3, it’s clear that the “Stories” watch folder has a higher failure rate than the others. Furthermore, the “SD Open Stories Test Plan” tends to have more failures than the other test plans. These data points give broadcasters a clear plan of action with regard to where to focus attention to improve content quality.

The checker-wise distribution is most useful for seeing whether QC tasks are evenly distributed across different checkers. In this particular example, it appears that two checkers are overloaded and are handling most of the tasks.
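
Breakdowns like those in Figure 3 can be reproduced directly from the task data. The following sketch assumes hypothetical column names for each criterion and a status column whose failure value is "failure":

import pandas as pd

# Hypothetical task export; the column names are assumptions.
tasks = pd.read_csv("qc_tasks.csv")

# Failure rate broken down by each criterion.
for criterion in ["test_plan", "watch_folder", "content_location", "checker"]:
    rate = (tasks["status"].eq("failure")
                           .groupby(tasks[criterion])
                           .mean()
                           .sort_values(ascending=False))
    print(f"Failure rate by {criterion}:")
    print(rate, end="\n\n")

# Task counts per checker, to see whether the load is evenly distributed.
print(tasks["checker"].value_counts())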

Another way to approach QC results is to look at the most common errors found across all tasks. Here, broadcasters are advised to examine the files with specific problems in detail and identify whether a common cause in the workflow is producing the same problem across many files. It’s important to note that file counts alone may not be sufficient: a specific error may appear in only a few files while occurring a very high number of times within each of them.
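
Both views of error frequency, how many files contain an error versus how often it occurs within those files, can be derived from per-error records in the QC reports. A minimal sketch, assuming a hypothetical export with one row per error occurrence:

import pandas as pd

# Hypothetical export with one row per error occurrence; column names are assumptions.
errors = pd.read_csv("qc_errors.csv")

summary = errors.groupby("error_type").agg(
    affected_files=("file_name", "nunique"),
    total_occurrences=("file_name", "size"),
)

# An error that is low in affected_files but high in total_occurrences is
# concentrated in a handful of problematic files.
print(summary.sort_values("total_occurrences", ascending=False))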

Capacity Planning

In order to have an efficient QC process, broadcasters must ensure that the QC system is lean and mean. There are several aspects to address:

Checkers shouldn’t sit idle.
QC tasks should be completed in a reasonable amount of time.
Higher-priority tasks should be given resources accordingly.
Enough capacity should be available to handle peak-load situations.
In multi-site QC systems, resources should be distributed equitably.
The distribution of resources and their usage should be reviewed regularly.

Figure 4. Core Utilization and Task Queue
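
A utilization view like the one in Figure 4 can be approximated from per-task timing data. The sketch below assumes hypothetical start and end timestamp columns and ignores per-task core counts, so it is a rough estimate rather than a true core-level measurement:

import pandas as pd

# Hypothetical task export with per-task timing; column names are assumptions.
tasks = pd.read_csv("qc_tasks.csv", parse_dates=["started_at", "ended_at"])
tasks["busy_hours"] = (tasks["ended_at"] - tasks["started_at"]).dt.total_seconds() / 3600

# Approximate utilization: busy hours per checker over the whole reporting window.
window_hours = (tasks["ended_at"].max()
                - tasks["started_at"].min()).total_seconds() / 3600
utilization = tasks.groupby("checker")["busy_hours"].sum() / window_hours

# Checkers with persistently low utilization are candidates for re-balancing.
print(utilization.sort_values())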

Broadcasters can increase the average performance index by allocating more cores per task, disabling non-essential checks in the test plan, and/or ensuring faster access to content from the checkers. While using more CPUs for a QC task may seem appealing, it does not improve performance proportionally. Furthermore, while increasing the performance index may reduce QC time, the system may sit idle if there is not always enough content to process. Adding more checkers to a QC system is expensive, so broadcasters will want to find a good balance between getting QC done fast and keeping the QC system reasonably utilized at all times. In this context, it is useful to divide the performance data according to various parameters. For instance, broadcasters may want to look at the performance index for SD, HD, 4K, and 8K files separately. This helps determine which files require more cores to achieve better overall performance.
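
A sketch of such a breakdown follows. The column names are hypothetical, and the performance index is given an assumed definition (content duration processed per unit of QC processing time), since the article does not fix one:

import pandas as pd

# Hypothetical task export; performance index defined here as content
# duration divided by QC processing time (an assumed definition).
tasks = pd.read_csv("qc_tasks.csv")
tasks["perf_index"] = tasks["content_duration_sec"] / tasks["qc_time_sec"]

# Median performance index per resolution class and core allocation, to see
# where extra cores actually pay off.
print(tasks.pivot_table(index="resolution_class",
                        columns="cores_allocated",
                        values="perf_index",
                        aggfunc="median"))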

Each check in a test plan adds to the overall QC time; video and audio quality checks in particular add significantly. By reviewing test plan QC results, broadcasters can determine which checks never fail in a particular workflow. Some of these checks may be required from a regulatory compliance perspective and should never be switched off, but others can be disabled altogether.
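
Finding checks that never fail is a simple aggregation over per-check results. The sketch below assumes a hypothetical export with one row per (task, check) result:

import pandas as pd

# Hypothetical export with one row per (task, check) result; names are assumptions.
check_results = pd.read_csv("qc_check_results.csv")

failures_per_check = (check_results["result"].eq("fail")
                                             .groupby(check_results["check_name"])
                                             .sum())

# Checks that have never failed in this workflow; review each against
# regulatory compliance requirements before disabling it.
never_failed = failures_per_check[failures_per_check == 0].index.tolist()
print(never_failed)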

Sometimes different departments of the same broadcast organization purchase independent copies of QC systems, and in the case of multiple offices in different locations, having independent QC systems cannot be avoided. An analytics system can import data from these different installations and provide a combined, top-level overview for strategic planning.
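
Combining data from independent installations can be as simple as concatenating their exports with a site label, provided they share a common schema. A minimal sketch with hypothetical file names:

import pandas as pd

# Hypothetical per-site exports sharing the same QC task schema.
site_exports = {"site_a": "qc_tasks_site_a.csv", "site_b": "qc_tasks_site_b.csv"}

combined = pd.concat(
    [pd.read_csv(path).assign(site=name) for name, path in site_exports.items()],
    ignore_index=True,
)

# Top-level failure rate per site for strategic comparison.
print(combined.groupby("site")["status"].apply(lambda s: s.eq("failure").mean()))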

Conclusion

The Holy Grail for broadcasters is to provide an immersive and seamless television experience to viewers on every screen. As the content lifecycle continues to expand, with broadcasters contributing and distributing content in new ways, a structured and automated content QC approach that leverages analytics will become an even more essential part of modern file-based media workflows, ensuring that broadcasters can deliver high-quality content.
