Wavelet Image Coding with Context-Based Zerotree Quantization Framework

Kai YANG  Hiroyuki KUDO  Tsuneo SAITO  

IEICE TRANSACTIONS on Information and Systems   Vol.E83-D   No.2   pp.211-222
Publication Date: 2000/02/25
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Image Processing, Image Pattern Recognition
Keywords: image compression, wavelet transform, zerotree quantization, adaptive quantization, source coding


We introduce a new wavelet image coding framework using context-based zerotree quantization, in which a unique and efficient method for optimizing zerotree quantization is proposed. Because of the localization properties of wavelets, when a wavelet coefficient is to be quantized, the best quantizer should be designed to match the statistics of the wavelet coefficients in its neighborhood; that is, the quantizer should be adaptive in both the space and frequency domains. Previous image coders tended to design quantizers at the band or class level, which limited their performance because the localization properties of wavelets could not be fully exploited. In contrast to previous coders, we propose to track these localization properties by combining tree-structured wavelet representations with adaptive models that vary spatially according to the local statistics. In this paper, we describe the proposed coding algorithm, in which the spatially varying models are estimated from the quantized causal neighborhoods and the zerotree pruning is based on a Lagrangian cost that can be evaluated from the statistics near the tree. In this way, the optimization of zerotree quantization is no longer a joint optimization problem as in SFQ. Simulation results demonstrate that the coding performance is competitive with, and sometimes superior to, the best zerotree-based coding results reported for SFQ.
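The bottom-up Lagrangian pruning the abstract describes can be illustrated with a minimal sketch. This is not the paper's coder: the function name, the quadtree layout (level l holds a 2^l-by-2^l array, so each coefficient has four children one level finer), and the flat `bits_per_coeff` rate model are all assumptions standing in for the paper's context-based models. Each subtree is pruned to a zerotree whenever the distortion of zeroing it is no worse than the Lagrangian cost (quantization distortion plus lambda times rate) of keeping it.

```python
import numpy as np

def sum_children(a):
    # Aggregate a fine-level cost map into its 2x2 parent blocks.
    h, w = a.shape
    return a.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

def prune_zerotree(coeffs, lam, step, bits_per_coeff=4.0):
    """Bottom-up Lagrangian zerotree pruning (sketch, not the paper's coder).

    coeffs: list of arrays, coarse to fine; level l has shape (2**l, 2**l),
    so each coefficient at level l has four children at level l+1.
    Returns per-level boolean maps: True = subtree pruned to zero.
    The flat bits_per_coeff rate model is a placeholder for the
    context-based models estimated from quantized causal neighborhoods.
    """
    L = len(coeffs)
    zero_cost = [c ** 2 for c in coeffs]   # distortion of zeroing each node
    best_cost = [None] * L
    pruned = [np.zeros_like(c, dtype=bool) for c in coeffs]
    for l in range(L - 1, -1, -1):         # leaves first, then up the tree
        q = step * np.round(coeffs[l] / step)          # uniform scalar quantizer
        j_keep = (coeffs[l] - q) ** 2 + lam * bits_per_coeff
        if l < L - 1:
            # Keeping a node also commits to the best decision for its children;
            # pruning a node zeroes its whole subtree.
            j_keep = j_keep + sum_children(best_cost[l + 1])
            zero_cost[l] = zero_cost[l] + sum_children(zero_cost[l + 1])
        pruned[l] = zero_cost[l] <= j_keep  # prune where zeroing is cheaper
        best_cost[l] = np.minimum(zero_cost[l], j_keep)
    return pruned
```

Because each node's cost folds in the best (already minimized) cost of its children, the pruning decision at every node is local, which is what removes the joint-optimization step of SFQ.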