Motion-Compensated Prediction Method Based on Perspective Transform for Coding of Moving Images

Atsushi KOIKE  Satoshi KATSUNO  Yoshinori HATORI  

IEICE TRANSACTIONS on Communications   Vol.E79-B   No.10   pp.1443-1451
Publication Date: 1996/10/25
Print ISSN: 0916-8516
Type of Manuscript: Special Section PAPER (Special Issue on Very Low Bit-Rate Video Coding)
Keywords: motion-compensated prediction, perspective transform, very low bit-rate image coding, motion detection, affine transform


The hybrid image coding method is one of the most promising approaches to efficient coding of moving images. It jointly makes use of motion-compensated prediction and an orthogonal transform such as the DCT. This type of coding scheme was adopted as the basic framework of several world standards, such as H.261 and MPEG, in ITU-T and ISO [1], [2]. Most work on motion-compensated prediction has been based on a block matching method. However, when input moving images include complicated motion such as rotation or enlargement, block matching often causes block distortion in decoded images, especially in very low bit-rate image coding. Recently, as one way of solving this problem, motion-compensated prediction methods based on an affine or bilinear transform have been developed [3]-[8]. These methods, however, cannot always express the apparent motion in the image plane, which is a projection from 3-D space onto a 2-D plane, since a perspective transform is usually assumed in that projection. A motion-compensation method using a perspective transform was also discussed in Ref. [6]; however, since its motion detection is defined as an extension of the block matching method, it cannot always detect motion parameters as accurately as gradient-based motion detection. In this paper, we propose a new motion-compensated prediction method for coding of moving images, especially for very low bit-rate image coding at less than 64 kbit/s. The proposed method is based on a perspective transform and on the constraint principle for the temporal and spatial gradients of pixel values; in addition to translational motion, complicated motion in the image plane, including rotation and enlargement due to camera zooming, can also be detected theoretically. A computer simulation was performed using moving test images, and the resulting predicted images were compared with those of conventional methods, such as the block matching method, using SNR and entropy as criteria.
The results showed that the SNR and entropy of the proposed method are better than those of the conventional methods. The proposed method was also applied to very low bit-rate image coding at 16 kbit/s and compared with a conventional method, H.261; the resulting SNR and decoded images were better than those of H.261. We conclude that the proposed method is effective as a motion-compensated prediction method.
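The two ingredients combined in the abstract can be illustrated with a minimal sketch: the 8-parameter perspective motion model, and the gradient (optical-flow) constraint Ix·u + Iy·v + It = 0 solved by least squares. This is a hypothetical illustration, not the authors' implementation; the parameterization, the function names, and the use of a pure-translation example (rather than the full perspective parameter set) are assumptions for clarity.

```python
import numpy as np

def perspective_warp(x, y, a):
    """8-parameter perspective transform of a coordinate (x, y):
    x' = (a0*x + a1*y + a2) / (a6*x + a7*y + 1)
    y' = (a3*x + a4*y + a5) / (a6*x + a7*y + 1)
    Affine motion is the special case a6 = a7 = 0."""
    d = a[6] * x + a[7] * y + 1.0
    return (a[0] * x + a[1] * y + a[2]) / d, (a[3] * x + a[4] * y + a[5]) / d

def estimate_translation(frame0, frame1):
    """Least-squares solution of the gradient constraint for a global (u, v).
    The same constraint extends to all eight perspective parameters by
    expressing (u, v) per pixel as a function of the parameters."""
    gy, gx = np.gradient(frame0)           # spatial gradients Iy, Ix
    gt = frame1 - frame0                   # temporal gradient It
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    uv, *_ = np.linalg.lstsq(A, -gt.ravel(), rcond=None)
    return uv                              # [u, v]

# Identity perspective parameters leave coordinates unchanged.
ident = np.array([1.0, 0, 0, 0, 1.0, 0, 0, 0])
print(perspective_warp(10.0, 20.0, ident))   # (10.0, 20.0)

# Synthetic Gaussian blob shifted by a known sub-pixel motion: the
# gradient-based estimate recovers roughly (0.3, -0.2).
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda dx, dy: np.exp(-((xx - 32 - dx) ** 2 + (yy - 32 - dy) ** 2) / 128.0)
u, v = estimate_translation(blob(0, 0), blob(0.3, -0.2))
print(round(u, 2), round(v, 2))
```

Gradient-based estimation of this kind avoids the integer-displacement search of block matching, which is the accuracy argument the abstract makes against the block-matching extension of Ref. [6].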