Abstract
In this paper, we propose a fast encoding method to accelerate the affine motion estimation (AME) process in versatile video coding (VVC) encoders. The recently launched VVC project for next-generation video coding standardization far outperforms the High Efficiency Video Coding (HEVC) standard in terms of coding efficiency. The first version of the VVC test model (VTM) delivers superior coding efficiency, but at substantially higher encoding complexity, owing to advanced inter-prediction techniques combined with the multi-type tree (MTT) partitioning structure. In particular, the AME technique in VVC is designed to capture temporal redundancies beyond purely translational motion, such as rotation and zooming, thereby achieving more accurate motion prediction. The VTM encoder, however, incurs considerable computational complexity because the AME process is invoked repeatedly during recursive MTT partitioning. In this paper, we identify features that reflect the statistical characteristics of MTT and AME, and propose a method that uses these features to skip redundant AME processes. Experimental results show that, compared to VTM 3.0, the proposed method reduces the AME time of VTM to 63% of the original on average, while the coding loss remains within 0.1% in the random-access configuration.
Introduction
The amount of video data has increased rapidly, especially with the growing use of Internet-based streaming services and devices that receive video broadcasts. Because the bandwidth and storage capacity available to video applications are limited, efficient video compression techniques are essential. The need for compression will only grow with emerging immersive formats such as 360-degree and high-dynamic-range (HDR) video. Considering this diverse and growing demand for more powerful compression, a new video coding standardization project called versatile video coding (VVC) was launched recently by the Joint Video Experts Team (JVET), formed by two expert groups: the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG). The JVET published the initial draft of VVC in 2018 [1] and released the VVC test model (VTM). VTM has a structure similar to that of the High Efficiency Video Coding (HEVC) test model (HM), but it uses advanced tools that provide better compression performance. A key concept among these tools is the multi-type tree (MTT) segmentation structure [2].
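To illustrate why recursive MTT partitioning drives up encoder complexity, the following is a minimal sketch that enumerates candidate blocks under a simplified MTT: a block may be left unsplit, or split by quadtree, binary (horizontal/vertical), or ternary (horizontal/vertical) partitioning. The size threshold and depth limit here are illustrative assumptions, not VVC's actual constraints; real VTM applies many additional restrictions and rate-distortion pruning.

```python
# Simplified sketch of multi-type tree (MTT) partitioning.
# The 16-sample minimum split size and the depth limit are
# illustrative assumptions, not the actual VVC constraints.

def mtt_children(w, h):
    """Return the child block sizes produced by each allowed split type."""
    splits = {}
    if w >= 16 and h >= 16:
        splits["QT"] = [(w // 2, h // 2)] * 4                      # quadtree
    if h >= 16:
        splits["BT_H"] = [(w, h // 2)] * 2                         # binary horizontal
        splits["TT_H"] = [(w, h // 4), (w, h // 2), (w, h // 4)]   # ternary horizontal
    if w >= 16:
        splits["BT_V"] = [(w // 2, h)] * 2                         # binary vertical
        splits["TT_V"] = [(w // 4, h), (w // 2, h), (w // 4, h)]   # ternary vertical
    return splits

def count_blocks(w, h, depth):
    """Count every block an exhaustive encoder would evaluate when all
    split combinations are tried down to `depth` levels."""
    total = 1  # the current block itself is one candidate
    if depth == 0:
        return total
    for children in mtt_children(w, h).values():
        for cw, ch in children:
            total += count_blocks(cw, ch, depth - 1)
    return total
```

For example, `count_blocks(64, 64, 1)` returns 15 (the block itself plus 14 children across the five split types), and the count grows rapidly with depth. Since motion estimation, including AME, is run for candidate blocks at every level, this combinatorial growth is what makes skipping redundant AME invocations attractive.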