As a content creator, it’s important to understand why Quality Control is an essential part of the post-production process. Quality Control helps to ensure that any project you work on meets the requirements for distribution, whether it’s only being published on YouTube or going global on Netflix or to a traditional broadcaster. If your project fails Quality Control, it may cost you both time and money, and that can make or break the release of your project.
Lightworks is the only NLE with native integration into a video AQC (Automated Quality Control) platform, QScan. For the first time, video editors can automatically QC their clips and sequences in their editing timelines, helping them deliver finished projects to distributors or publishers faster than ever before, and accurate to their delivery requirements.
Modern distributors may need to present material in a wide variety of circumstances. A single production might be delivered to theatrical, broadcast and online channels at various resolutions and aspect ratios, in different file formats with different codecs and colourimetry, sound mixes, and several HDR implementations. Distributors expect to be able to import files to content distribution systems and broadcast servers as a zero-intervention process and will perform analysis to ensure that can happen without problems.
Because QScan can analyse a vast range of characteristics of a media file, it includes templates reflecting the requirements of many key distributors and covering a wide variety of everyday situations. For conditions not covered by a template, one can be modified or a new one created from scratch.
QScan can analyse a huge range of files via a web browser interface and become part of any workflow. Integration with Lightworks means that QScan can link users directly to a timeline region where problems have been detected, making fixes easy.
There will always be a place for an eyeball assessment, particularly for material from various sources which may legitimately vary in both aesthetic and technical quality. Still, a lot of time can be saved by automatically checking technical requirements with QScan, especially given that many potential problems can't be seen with the naked eye. The human eye evolved to absorb the world around it, not to actively QC it.
Even relatively large discrepancies in codec settings, audio encoding, colourspace or out-of-bounds brightness may not be immediately apparent even to trained eyes but might still cause a supplied file to fail a client's internal QC checks.
The analysis performed in an AQC workflow covers a massive number of frequent requirements and common problem areas, including file formatting, audio and video content, colour and brightness encoding, compression and audio levels, among others. Not every test will be relevant to every deliverable, although most will need at least something from each area of analysis QScan covers.
For a file's header to be readable at all, the file must be in the correct container format, often something like MXF. The container format is frequently indicated both in the filename suffix and in a magic number in the first handful of bytes of the file. The file size can also be checked to ensure it isn't unreasonably large or small for the anticipated material.
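The idea of a magic-number check can be sketched in a few lines. This is a hypothetical illustration, not QScan's actual implementation; it relies on the fact that SMPTE KLV-coded files such as MXF begin with the Universal Label prefix 06 0E 2B 34, and the size bounds are arbitrary examples.

```python
# Minimal sketch of a container "magic number" check. MXF files open
# with a KLV partition pack whose key starts with the SMPTE Universal
# Label prefix 06 0E 2B 34.
MXF_UL_PREFIX = bytes([0x06, 0x0E, 0x2B, 0x34])

def looks_like_mxf(first_bytes: bytes) -> bool:
    """Return True if the file's opening bytes carry the MXF UL prefix."""
    return first_bytes.startswith(MXF_UL_PREFIX)

def size_is_plausible(size_bytes: int, min_bytes: int, max_bytes: int) -> bool:
    """Sanity-check the file size against expectations for the material."""
    return min_bytes <= size_bytes <= max_bytes
```

A real checker would go on to parse the full partition pack key, but even this two-line sniff catches a file that was renamed to `.mxf` without actually being one.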
Having determined the container format, the first part of a file read by any piece of software will be its header, a comparatively small region at its beginning that describes the most general information about the material the file contains. Depending on the container format in use, QScan can check duration, number of video and audio streams, captioning, timecode, including both the presence of timecode and its start point, the name of the production, and other technicalities.
Broadcast delivery often involves IMF or MXF file types. An IMF package may comprise several separate files representing different audio or video aspects, all of which must be present and correct, properly linked in the master file and correctly identified in their internal data. MXF files must be of the proper format and version, with the right internal layout, including the file’s partitions, proper wrapping, index table and essence container configuration.
One complex problem with the way modern applications store video and audio is that the same information may be duplicated at various points in a file – particularly things like aspect ratio. For instance, most codecs can store aspect ratio, so it is theoretically possible to have an MXF file containing H.264 video where the MXF file header and the H.264 data disagree on the nature of the content. This shouldn’t happen and generally won’t, but problems can occur when two different devices read the same data from different places – and find it isn’t actually the same data.
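A cross-check for this kind of duplicated metadata can be illustrated as follows. The field values here are hypothetical: in practice the two aspect ratios would be parsed from the container header and the codec's own stream data respectively.

```python
from fractions import Fraction

# Illustrative consistency check between the display aspect ratio the
# container header claims and the one the codec data claims. Using
# exact fractions avoids false mismatches from decimal rounding
# (e.g. treating 16:9 and 1.78:1 as different).
def aspect_ratios_agree(container_dar: str, codec_dar: str) -> bool:
    """True when both layers of metadata describe the same aspect ratio."""
    def to_fraction(s: str) -> Fraction:
        num, den = s.split(":")
        return Fraction(int(num), int(den))
    return to_fraction(container_dar) == to_fraction(codec_dar)
```

The same pattern applies to any value stored twice: frame rate, field order, colour primaries and so on.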
Early digital imaging simply stored a list of numbers representing each pixel's red, green, and blue values. Modern formats intended to store video are likely to use one of a wide variety of compression methods, often reordering and subsampling pixel data before the mathematical compression of the codec comes into play.
Straightforward aspects of the picture include the resolution and frame rate of the video, as well as both the coded and display aspect ratios. Characteristics relevant only to interlaced-scan video, including scan type and field order, are verified. The pixel format describes how the three values for each pixel are stored and may involve issues of bit depth and chroma subsampling. An active format description may be included which describes aspects of the image such as its aspect ratio and active area (colloquially, the part of the picture which might be considered overscan).
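The practical effect of bit depth and chroma subsampling can be made concrete with a little arithmetic. This sketch assumes planar Y'CbCr storage with no padding and 10-bit samples stored in two bytes, which is common but by no means universal.

```python
# Average samples per pixel for common Y'CbCr subsampling schemes:
# 4:4:4 keeps full chroma, 4:2:2 halves it horizontally, 4:2:0
# halves it in both directions.
SUBSAMPLING_FACTORS = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def bytes_per_frame(width: int, height: int, bit_depth: int, subsampling: str) -> int:
    """Uncompressed frame size, assuming padding-free planar storage."""
    samples = width * height * SUBSAMPLING_FACTORS[subsampling]
    bytes_per_sample = (bit_depth + 7) // 8  # 10-bit stored in 2 bytes
    return int(samples * bytes_per_sample)
```

The point of the exercise: an 8-bit 4:2:0 file and a 10-bit 4:2:2 file of the same resolution look similar in a player, but carry very different amounts of data, which is exactly the kind of discrepancy a deliverables spec cares about.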
Problems here don’t usually create a situation where (for instance) non-RGB data is mistaken for RGB data; that would create an unwatchable mess. Insidious mistakes might include inadvertently creating an (entirely consistent and correct) RGB file when non-RGB was required, or an 8-bit file where 10 was required. Those problems are generally very hard to spot, even with experienced eyes, though they may still cause a quality control failure when the file is delivered.
For codecs which use the differences between frames to improve encoding performance, the structure of each group of pictures is checked, as well as the bit rate and the bit rate mode, which may affect how bit rate varies with picture complexity on a moment to moment basis.
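A GOP-structure check of this kind can be sketched from a decoded frame-type sequence. This is an illustrative stand-in, not a real QScan API: it simply verifies the distance between I-frames against a limit a distributor might specify.

```python
# Hypothetical GOP-length check on a frame-type string such as
# "IBBPBBPBBI...". The GOP length is the distance between
# consecutive I-frames; many delivery specs cap it.
def max_gop_length(frame_types: str) -> int:
    """Return the longest GOP found in the sequence."""
    gop_starts = [i for i, t in enumerate(frame_types) if t == "I"]
    gops = [b - a for a, b in zip(gop_starts, gop_starts[1:])]
    # With fewer than two I-frames, the whole sequence is one GOP.
    return max(gops) if gops else len(frame_types)
```

A bit rate check would work similarly, comparing a moving average of per-frame sizes against the declared constant or variable bit rate mode.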
Many codecs have various features which trade complexity for compression performance and can be switched on or off as required. Some codecs use published profiles, levels and sublevels which describe which of these features are in use; improper compression technique can affect compatibility with broadcast signal chains.
By far the most common codecs in broadcast delivery are from the MPEG family, including MPEG-2 and H.264. Both have a large selection of optional compression techniques. Avoiding higher-complexity techniques means poorer compression performance and worse picture quality than is really required; using them indiscriminately may cause compatibility problems, since not all devices support all compression techniques.
ProRes and DNxHD are simpler, although their compression performance is not as good, and since they will be recompressed before distribution, there’s less control over the actual picture that is broadcast.
Colour pictures are described using red, green and blue primaries, or, often, an alternative that's mathematically converted from red, green and blue. Many file formats can encode which shades of red, green and blue are being used, either as a named colour space or by specifying the colour primaries in terms of their coordinates on a CIE 1931 chromaticity diagram. Problems here can mean distortion of colours. Some files may also describe matrix coefficients, which describe colour conversion processing which was performed when the file was created, or processing which should be done before the picture is viewed.
While file formats may be capable of describing very saturated colour, deliverables are sometimes required to restrict the ranges of colours they use, particularly with HDR material. Checks for gamut errors ensure that all of the pixels in the material are within those limits. Data on the mastering display colour primaries used during grading of the material may be included, as may measurements of the absolute mastering display luminance in terms of its maximum and minimum levels, or brightness.
Different distribution technologies treat brightness differently, particularly in terms of the range of values in a file which are deliberately used to represent levels of brightness. The highest and lowest luminance levels and both headroom and footroom violations can be detected in the same way as chroma levels, and overall brightness and contrast analysed for inappropriate extremes.
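A headroom/footroom check can be illustrated with the common case of 10-bit narrow ("legal") range video, where black sits at code value 64 and reference white at 940; samples outside that band intrude into footroom or headroom. The threshold values are standard for narrow-range 10-bit video, but the function itself is purely a sketch.

```python
# Narrow-range 10-bit video levels: 64 = black, 940 = reference white.
# Values outside this band occupy footroom/headroom and may be flagged
# by a broadcaster's QC.
LEGAL_MIN_10BIT, LEGAL_MAX_10BIT = 64, 940

def range_violations(luma_samples):
    """Count samples below footroom and above headroom limits."""
    low = sum(1 for v in luma_samples if v < LEGAL_MIN_10BIT)
    high = sum(1 for v in luma_samples if v > LEGAL_MAX_10BIT)
    return low, high
```

The same sweep over chroma samples, against their own limits, gives the chroma-level check mentioned above.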
For decades, colourimetry according to the ITU’s Recommendation BT.709 (for HD material) or BT.601 (for SD material) was more or less the only option, and while things varied very slightly on each side of the Atlantic, the situation was relatively straightforward.
The invention of new techniques, including digital cinema exhibition (with DCI P3 colour), HDR (often with Rec. 2020 colour) and cinematography cameras (with various manufacturer-specific colour encodings), has prompted the development of many more ways of encoding colour and brightness.
As an extension to brightness encoding, High Dynamic Range (HDR) signalling is most often described in an HDR format such as HDR10 or Hybrid Log-Gamma. Various formats require different types of metadata, notably Dolby Vision metadata. An absolute maximum frame average light level (MaxFALL), representing the overall brightness of the brightest frame, is often recorded. Similarly, an absolute maximum content light level (MaxCLL), representing the single brightest pixel in the entire production, is used by some displays to tailor the content to that display’s particular capabilities.
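The definitions of MaxFALL and MaxCLL translate directly into code. This is a conceptual sketch only: it assumes each frame is already a flat list of per-pixel light levels in nits, whereas real measurement works from the decoded picture and the HDR transfer function.

```python
# MaxFALL: the highest per-frame average light level across the
# programme. MaxCLL: the single brightest pixel anywhere in it.
# Each frame here is a flat list of per-pixel light levels in nits.
def max_fall(frames):
    return max(sum(frame) / len(frame) for frame in frames)

def max_cll(frames):
    return max(max(frame) for frame in frames)
```

A QC pass then compares these measured values against the figures written into the file's metadata, which should agree.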
Checks described under colourimetry above apply especially to HDR material, where new formats are capable of describing highly saturated colours that some domestic displays struggle to handle cleanly, and that distributors may choose to avoid.
HDR is a complex topic. Any facility intending to set up for Dolby Vision mastering, as is generally required by the highest-end clients, requires consultancy and licensing from Dolby and specifically-configured hardware, with QScan useful in verifying that the resulting material is properly described.
Comparatively simple errors are among the most common. Long periods of blackness or frames of a constant colour might represent a shot missing from a timeline, while frozen frames, blurring or blocking are often due to improper compression.
Artefacts arising from the use of interlacing in a project which doesn’t use interlacing, improper field order or field dominance problems may cause ugly visual problems or even stroboscopy in motion. Improper letterbox or pillarbox presentation of material that is not the same aspect ratio as the final image might indicate problems in the interpretation or conforming of files.
Assessing overall image quality automatically allows users to minimise the involvement of humans by detecting the most common problems early. Excessive noise is detected, and QScan provides an overall picture quality score, but also detects issues caused by a lot of other problems. Tiny, inadvertent changes in files or dropouts on tape – digital dropouts – can cause anything from a practically invisible single-pixel glitch in a single frame to a large, brightly-coloured and obvious error that propagates across several frames. Blanking errors, often caused by scaling, repositioning or stabilisation of material, are seen as columns or rows of black pixels at the edge of frame. Artefacts due to a dead pixel in a camera are also detected.
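One of these checks, blanking-error detection, is simple enough to sketch. This hypothetical detector scans the left and right edges of a frame (here, a nested list of 8-bit luma values) for fully black columns of the kind left behind by scaling or repositioning; the threshold is 8-bit legal-range black.

```python
BLACK_THRESHOLD = 16  # 8-bit narrow-range black level

def black_edge_columns(frame):
    """Return (left, right) counts of fully black columns at the frame edges."""
    width = len(frame[0])

    def column_is_black(x):
        return all(row[x] <= BLACK_THRESHOLD for row in frame)

    left = 0
    while left < width and column_is_black(left):
        left += 1
    right = 0
    while right < width - left and column_is_black(width - 1 - right):
        right += 1
    return left, right
```

The equivalent scan over the top and bottom rows catches horizontal blanking; a deliverables spec typically requires all four counts to be zero.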
Picture quality analysis also detects flashing and strobing effects, particularly those which may affect people with photosensitive epilepsy.
While the data load of audio is typically smaller than that of picture, many of the same problems can occur. QScan checks for the correct bit rate, sample rate and bit depth. Audio may also be compressed, in the sense of file size compression rather than dynamic range compression, creating potential issues with the correct codec and bit rate mode, which may affect how the system handles sudden excursions in bitrate during high-complexity parts of the recording.
The number of channels is verified, which is especially useful where a production might deliver a single file containing versions in different languages, stereo, or multipoint surround. Simply playing the file in a conventional media player might not provide access to all of those audio channels.
Lower-level technical details such as the mapping, wrapping and block alignment may also be critical for compatibility, particularly with certain hardware playback devices.
Loudness is a bugbear of audio mastering, with any of several different measures preferred by various distributors.
QScan offers several approaches, and supports true peak level analysis, as well as simple verification that the file actually has audio, the audio tracks are not mute, nothing is clipping and the frequency response and phase, including mean audio phase, are correct.
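Two of the simplest of these checks, mute detection and clipping, can be sketched on normalised float samples. Note this only inspects sample peaks: proper true peak measurement (per ITU-R BS.1770) requires inter-sample oversampling, which is beyond this illustration.

```python
# Basic audio sanity checks on float samples normalised to -1.0..1.0.
def is_mute(samples, floor=1e-4):
    """True if every sample sits below the near-silence floor."""
    return all(abs(s) < floor for s in samples)

def clipped_count(samples, ceiling=1.0):
    """Count samples at or beyond full scale."""
    return sum(1 for s in samples if abs(s) >= ceiling)
```

A mute track with the right channel count is a classic failure mode: the file passes a glance in a player while an entire language version is silent.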
Our in-house editor, David Winter, takes you through how to improve your video content in Lightworks with Automated Quality Control.