Review: Moscow State University Video Quality Measurement Tool 14.1
Version 14 of the $995 Moscow State University Video Quality Measurement Tool (VQMT) has two significant new features and multiple enhancements that increase both utility and usability. If you’re an encoding professional who has somehow resisted this product before now, this new version should push you over the edge.
About VQMT
Some background: VQMT is a tool that lets you compare disk-based encoded files to the source and compute more than 20 metrics, including VMAF, PSNR, SSIM, MS SSIM, and an alphabet soup’s worth of other metrics shown in Figure 1. You can compute most metrics on individual color planes like R, G, B, Y, U, and V, or on the combined RGB and YUV, so the number of individual measurements you can compute easily runs into the hundreds.
Figure 1. The metrics available in VQMT 14.1.
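To make the per-plane options concrete, here’s a minimal numpy sketch of what a single-plane metric such as Y-PSNR computes. This is the textbook PSNR formula, not VQMT’s implementation, and the frame planes below are synthetic placeholders.

```python
import numpy as np

def psnr_y(ref_y: np.ndarray, enc_y: np.ndarray, max_val: float = 255.0) -> float:
    """Textbook PSNR computed over the luma (Y) plane of one frame.

    ref_y and enc_y are 2-D uint8 arrays of identical shape holding the
    Y plane of the source and encoded frames (placeholders here; VQMT
    extracts the planes from your files for you).
    """
    diff = ref_y.astype(np.float64) - enc_y.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical planes
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy example: a random 1080p luma plane and a lightly perturbed copy.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
enc = np.clip(ref.astype(int) + rng.integers(-3, 4, size=ref.shape), 0, 255).astype(np.uint8)
print(f"Y-PSNR: {psnr_y(ref, enc):.2f} dB")
```

Running the same formula over the U or V plane, or over all planes combined, is roughly what the per-plane options in Figure 1 boil down to; VQMT automates this for every frame and every metric you select.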
Each time you compare files in the GUI (VQMT also has a very robust command-line version if that’s your preference), the program creates the Result plot shown in Figure 2, which plots the frame score for each encoded file over the duration of the file. The graphic presentation lets you spot transient quality problems, like those shown in red and blue at the front of the files, and gauge quality variability. As an example, note that while the purple encoder doesn’t suffer the same lows as blue and red at the start of the file, its quality shows significant variability overall, which will degrade QoE.
Figure 2. Comparing five files in VQMT’s ultra-useful Result plot.
You navigate the Result plot using the slider on the bottom or by clicking anywhere in the plot. At any point in the file, you can press the Show frame button on the bottom right in Figure 2. This loads the Preview window, which enables multiple viewing modes for comparing the encoded files to each other and to the source.
To illustrate this point, Figure 3 is a screen grab of the Preview window that shows encoder 1, represented by the red line in Figure 2, on the left, and encoder 3, the orange/yellow line, on the right. As you can see in Figure 2, encoder 1 was a poor performer, while encoder 3 was among the best. If you click Figure 3 to view it at full size, you’ll see much more artifacting in the grass and the player’s uniform on the left, which confirms the low metric score shown in the upper right of both windows.
Figure 3. Viewing the low frame to determine if the score represents actual qualitative differences that viewers would notice.
It’s important that these “low-frame” views are accessible because sometimes metric scoring differences don’t represent actual visible differences in the video files. Remember that metrics are predictors of how human viewers will rate the video files, so you always want to verify low-frame regions to make sure that they represent noticeable differences. With the Result plot and Show frame button, VQMT makes it exceptionally simple to do so. As you’ll see on the bottom left in Figure 6, you can also export Bad frames with each analysis, an automated way to verify and save the low-frame scores.
Back in the Preview window, you can view the frame in its original encoded quality (Figure 3) or a map of the per-pixel metric values.
I’ve configured the map in black and white, but you can choose many other color schemes. In Figure 4’s map of the per-pixel metric values, the frame with more white space is the higher-quality frame. On the bottom left of Figure 4, you see the frame views you can choose for the left side of the screen; the drop-down list on the right configures the right-side view.
Figure 4. This shows the per-pixel metric values in black and white; lighter is better.
After choosing the encoded files and the display (frame or pixel map), you can position the output side by side, as shown in Figures 3 and 4, top and bottom, or in a view with a slider that you can drag left and right to reveal one frame and hide the other. You can also view each frame in full and toggle to the next via the drop-down list shown on the bottom left of Figure 4 or the Ctrl+ controls shown. Overall, it’s a fabulous mechanism for exploring the quality deficits reported by the metric values.
Each time you analyze a file or files, VQMT produces a CSV file with the data shown in Figure 5, which I’ve imported into Google Sheets to better illustrate the contents. This is the VMAF analysis; for each file, you see the mean, harmonic mean, min/max values, min/max frame numbers, as well as standard deviation and variance, both measures of quality variability. Below those summary metrics are the individual frame scores for each file.
Figure 5. Numerical results reported by VQMT.
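If you prefer to crunch these numbers in code rather than in a spreadsheet, a short Python script along the following lines works. Note that VQMT’s actual CSV layout depends on the metrics and the number of files you analyze, so the file name and column names here are placeholders for a simplified single-file export.

```python
import csv
import statistics

# Placeholder layout: one header row, then "frame,vmaf" rows for one encode.
# VQMT's real CSV (Figure 5) adds summary rows and one column per file,
# so adapt the parsing to whatever layout your analysis produces.
scores = []
with open("vqmt_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        scores.append(float(row["vmaf"]))

mean = statistics.fmean(scores)
harmonic_mean = statistics.harmonic_mean(scores)
stdev = statistics.stdev(scores)

print(f"mean={mean:.2f}  harmonic mean={harmonic_mean:.2f}  stdev={stdev:.2f}")

# The lowest-scoring frames are the ones worth eyeballing in the Preview window.
lowest = sorted(range(len(scores)), key=scores.__getitem__)[:5]
for frame in lowest:
    print(f"frame {frame}: {scores[frame]:.2f}")
```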
Overall, VQMT lets you easily compute over 20 highly configurable metrics, visualize the results, view the actual frames, and import the numerical results into a spreadsheet. It’s a program that I use on multiple computers in my office nearly every day.
Here are the major additions to version 14.
Compare More than Two Files in the GUI
Probably my most frequent frustration with VQMT to date has been the inability to compare more than two files at one time. That limitation is gone; you can now compare an unlimited number of files, subject to memory limitations. I tested up to ten and had no issues.
You see this in Figure 6, where I compare five HEVC-encoded files to the same source; this is the analysis that produced all the results presented above. To add more files, click the small blue Plus button on the extreme lower right; to delete any processed slot, click the Minus button to the right of that slot.
Figure 6. VQMT can now process a seemingly unlimited number of files.
Once you go beyond five or six files, the graphs get too busy for easy interpretation and the CSV too crowded for convenient data extraction. Still, it’s remarkable how often you want to compare two to five files and visualize the results, and the ability to do so is a great enhancement.
Python Interface
The other major feature addition (in Version 14.1) is a Python interface for simplified VQMT scripting that you can read about here. As described on that page, the package is a wrapper over VQMT that will “load the library as an instance of msu_vqmt.SharedInterface class and will initialize VQMT. Using this object, you can run any number of measurements without reinitialization.”
There are multiple examples provided, including installing, activating, and running the metric (shown in Figure 7), plus troubleshooting, running asynchronously, grabbing values interactively, and using a numpy matrix as a source of input frames. In general, Python is a more flexible and powerful tool for running VQMT and processing the results than command-line scripting, and the Python interface will make the program more useful for those with Python programming skills, or access to someone who has them.
Figure 7. Using Python to run MSU VQMT.
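For readers who haven’t clicked through to MSU’s documentation, here’s a rough sketch of the shape such a script takes. Only the msu_vqmt package and its SharedInterface class are named in MSU’s description; the method name and arguments below are my placeholders, so consult MSU’s examples (Figure 7) for the actual calls.

```python
# Sketch only: msu_vqmt and SharedInterface come from MSU's documentation,
# but measure() and its arguments are hypothetical placeholders; see MSU's
# own examples for the real API.
import msu_vqmt

vqmt = msu_vqmt.SharedInterface()  # loads the library and initializes VQMT once

SOURCE = "source.mp4"
ENCODES = ["enc_hevc_1.mp4", "enc_hevc_2.mp4", "enc_hevc_3.mp4"]

for encode in ENCODES:
    # Hypothetical call: compare one encode against the source and compute VMAF,
    # reusing the same initialized instance for every measurement.
    result = vqmt.measure(original=SOURCE, processed=encode, metric="vmaf")
    print(encode, result)
```

The attraction over command-line scripting is exactly what MSU describes: one initialization, any number of measurements, and results that land in Python objects you can post-process however you like.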
Other New Features: Measurement History
Sharp-eyed VQMT users might have noticed the new multi-line blue icon to the right of the Start icon on the bottom left of Figure 6. This opens the Measurement History window shown in Figure 8, which conveniently stores previous measurements made by the program. Not only can you immediately grab the scoring data from the table, but you can also recall the measurement and redisplay it in the Result plot. So long as all the files are still in the same location, you have full use of the Result plot, including the ability to view compressed frames as detailed above.
Figure 8. Recalling a previous measurement from the Measurement History window to the Result window.
Depending upon the duration of your files and the selected metric, it can take up to a few hours to compute the metrics, and processing time will only increase as you add additional files. Previously, once you closed the Result plot, the analysis was gone, and if you wanted to revisit the visual results, you had to rerun it. The new Measurement History window will prove incredibly useful in practice.
Also new in 14 is the ability to manually save a results file outside of the Measurement History window, which you can later retrieve and redisplay. This is a nice complement to the Measurement History, because it lets you store the results file in a separate folder and preserve it with other project-related files.
Other New Features
Other new features include superfast SSIM and MS SSIM modes that, in my tests, cut the processing time of a 73-minute video file from 9:22 (min:sec) to 3:20. I also computed the scores using the precise model and show all results in Table 1. While the scores all differ slightly, the range is tiny, and the quality conclusions would be the same with any of the results. Both the fast and superfast modes look to be useful additions for those who rely upon SSIM and MS SSIM.
Table 1. The new superfast mode for SSIM and MS SSIM cut processing time by roughly two-thirds with minimal impact on the metric values.
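For the record, here’s the arithmetic behind that claim, using the timings reported above:

```python
# Timing arithmetic for the 73-minute test file reported above.
before_s = 9 * 60 + 22     # 9:22 -> 562 seconds without the new mode
superfast_s = 3 * 60 + 20  # 3:20 -> 200 seconds with the superfast mode
speedup = before_s / superfast_s                 # ~2.8x faster
reduction = (before_s - superfast_s) / before_s  # ~64% less processing time
print(f"{speedup:.2f}x faster, {reduction:.0%} reduction in processing time")
```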
If you work with exceptionally long files, you’ll find that MSU has accelerated all zooming, sliding, and other navigational movements in the Result plot. I evaluated this enhancement after measuring VMAF on a 105-minute movie. With version 13, moving around the Result plot was jerky, with slight delays while the GUI caught up as I navigated. With version 14, navigating the Result plot was instantaneous, no different than working with a 10-second test file.
Much More Functional Free Version
Finally, version 14 offers a much more useful free version than its predecessors, which were limited to files below 720p in resolution. The new free version lets you analyze files up to 8K resolution but lacks the features shown in Table 2. Note that Table 2 is a truncated version of the table on the MSU site, cut for spacing reasons; the free version has many other features not shown in our table. All MSU asks is that you use the free version for personal work, not for business. All told, the free version is now a much more compelling try-before-you-buy option.
Table 2. Features and pricing of the four VQMT versions.
From my perspective, VQMT has long been a critical tool for all encoding professionals, and the new features and enhancements delivered in version 14.1 make it even more so.