Hierarchical Vector Quantization is a form of VQ in which the quantization
of an image vector is broken into a series of quantizations on subcomponents
of the vector; the indices resulting from each pair of those quantizations
are in turn fed into another quantization, cascaded until the overall image
vector quantizes to a single index. The series of quantizations is arranged
so that each individual quantization step can be precomputed and performed
with a single LUT. The HVQ encoding process then becomes a cascade of table
look-ups. This yields an extremely fast encoding algorithm, since no
arithmetic on pixel data is required at encode time.
To illustrate, let's start with a 4x4 vector containing 8-bit values
which represent spatially adjacent pixels in a grayscale image. A full-search
VQ process compressing this to an 8-bit index would require computing the
distortion against every one of 256 codevectors.
The HVQ setup used by IPCore breaks this 4x4 into a cascaded series
of four LUTs. Each of the 2x1 pairs in the 4x4 is first fed into a LUT of
precomputed VQ indices. Each pair of indices corresponding to a 2x2 is then
used to index a 2x2 LUT of precomputed VQ indices. Each pair of 2x2 indices
is combined to index a 4x2 LUT. And finally the remaining pair of 4x2 indices
is used to index a 4x4 LUT, resulting in one index for the entire 4x4 vector.
[Figure: HVQ block diagram]
Each of the four stages in the HVQ process reduces the amount of information,
so that each LUT stays a manageable size (64K entries). The quantization
process is then only a series of look-ups and bit shifts, making the encoding
process extremely quick.
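The cascade described above can be sketched in a few lines. This is a minimal illustration, not the IPCore implementation: the real LUTs are precomputed from trained VQ codebooks, whereas here each "codebook" degenerates to mean intensity, so every 64K-entry table maps a pair of 8-bit indices (i, j) to (i + j) // 2. The mechanics of the four cascaded look-ups, however, are exactly as in the text.

```python
# Degenerate stage LUTs for illustration: 64K entries each, addressed by
# (hi_index << 8) | lo_index.  A real HVQ encoder fills these from trained
# codebooks; here every stage just averages its two input indices.
def build_stage_lut():
    return [(hi + lo) // 2 for hi in range(256) for lo in range(256)]

LUT_2x1, LUT_2x2, LUT_4x2, LUT_4x4 = (build_stage_lut() for _ in range(4))

def hvq_encode(block):
    """Encode a 4x4 list-of-rows of 8-bit pixels into one 8-bit index."""
    # Stage 1: eight horizontal 2x1 pixel pairs -> eight indices.
    s1 = [LUT_2x1[(block[r][c] << 8) | block[r][c + 1]]
          for r in range(4) for c in (0, 2)]
    # Stage 2: vertically adjacent 2x1 indices -> four 2x2 indices.
    s2 = [LUT_2x2[(s1[i] << 8) | s1[i + 2]] for i in (0, 1, 4, 5)]
    # Stage 3: horizontally adjacent 2x2 indices -> two 4x2 indices.
    s3 = [LUT_4x2[(s2[0] << 8) | s2[1]], LUT_4x2[(s2[2] << 8) | s2[3]]]
    # Stage 4: one index for the whole 4x4 -- four look-ups deep, with no
    # arithmetic on pixel data anywhere in the encode path.
    return LUT_4x4[(s3[0] << 8) | s3[1]]

# Decoding is a single look-up into a table of 4x4 codevectors (here the
# degenerate codebook reconstructs a flat block of the mean intensity).
DECODE = [[[i] * 4 for _ in range(4)] for i in range(256)]

def hvq_decode(index):
    return DECODE[index]
```

With the degenerate tables, a flat block encodes to its own intensity, and a half-black, half-white block encodes to the index for mid-gray.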
The decoding process is even quicker: the 4x4 index indexes a table of
4x4 vectors directly, just as in full-search VQ. Decoding is thus a single
LUT, and is also extremely fast.
Depending on the tolerable quantization error and the expected quality,
fewer than all four stages of the HVQ may be used. Output taken after each
of the four stages yields compression ratios of 2:1, 4:1, 8:1, and 16:1
respectively.
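The arithmetic behind those ratios can be checked directly: a 4x4 block of 8-bit pixels is 16 bytes, and each stage halves the number of 8-bit indices describing it.

```python
# Sanity check of the stated ratios: each HVQ stage halves the number of
# 8-bit indices that describe a 16-byte 4x4 block.
bytes_per_block = 16
indices_after_stage = [8, 4, 2, 1]          # after stages 1 through 4
ratios = [bytes_per_block // n for n in indices_after_stage]
print(ratios)  # [2, 4, 8, 16] -> 2:1, 4:1, 8:1, 16:1
```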
When Should HVQ Be Used
Since HVQ is so fast, it is an attractive solution to compression where
speed is a critical factor. Results show that the quality is acceptable
at a compression ratio of 4:1, when quantizing high-resolution documents
at 400 dpi or greater. On this type of image, compression is over four
times faster than JPEG, whereas for the same quality JPEG might typically
yield 16:1 compression or better. Depending on the content of the image,
HVQ followed by LZW can produce comparable quality and comparable compression,
but at the added cost of the LZW pass.
Here is an example of the results when using LZW on HVQ indices:
Compression Ratios    4:1 HVQ      8:1 HVQ      16:1 HVQ
                      (2-level)    (3-level)    (4-level)
ViseMixed1              6.39        11.19        21.36
ViseTxdp               97.93       110.07       161.54
VisePict1               4.66         9.08        18.22
VisePict2               4.36         8.50        16.70
Note that the HVQ indices of the text images compress much more with LZW.
Other Benefits of HVQ
Compressed Files Can Be Viewed
By reordering the codebook vectors before the LUTs are computed, VQ indices
can be made to correspond to the relative intensities found in the corresponding
codevectors. In other words, if the codebook vectors are reordered such that
the first vector is the one with the least accumulated intensity and the
last vector is the one with the highest, then the VQ indices correlate with
codevector relative intensity, and the compressed data itself can be viewed
as a crude rendition of the image.
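The reordering step can be sketched as follows. The tiny flat codevectors in the usage example are an illustrative assumption; a real codebook comes from training, and the stage LUTs must be (re)computed after the reorder, as noted above.

```python
# Sort a codebook by accumulated intensity so that index order tracks
# brightness.  Must be done BEFORE the stage LUTs are precomputed.
def reorder_by_intensity(codebook):
    """Return (new_codebook, old_to_new index map), sorted by summed intensity."""
    order = sorted(range(len(codebook)), key=lambda i: sum(codebook[i]))
    old_to_new = {old: new for new, old in enumerate(order)}
    return [codebook[i] for i in order], old_to_new

# Hypothetical 3-entry codebook of flat 16-pixel vectors, for illustration:
cb = [(200,) * 16, (0,) * 16, (100,) * 16]
new_cb, remap = reorder_by_intensity(cb)
# new_cb now runs darkest-to-brightest, so an image of these indices,
# viewed directly, is a crude low-resolution rendition of the original.
```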
Fast Image Processing by Processing the Codevectors
Several image processing routines can take advantage of the fact that an
HVQ-compressed image represents more spatial information than it appears
to. Since each index corresponds to a spatial vector of size 2x1, 2x2,
4x2 or 4x4, image processing routines that don't interfere with the indexing,
or that can be made to operate on the codevectors instead, will yield faster
throughput when HVQ has already been performed:
An image compressed to 4x4 indices can easily
be decompressed into the 4x4 codevectors directly; or, if a resolution change
is desired, the 4x4 codevectors may be transformed into vectors at a different
resolution and/or depth.
For example, let's say we have an image compressed to 4x4 indices, and
instead of 4x4 vectors at 8 bpp, we desire 8x8 vectors at 1 bpp. By converting
the codevectors from 4x4x8 with a Floyd-Steinberg error diffusion that
confines the diffused error to remain within the resulting 8x8 region,
we can create a second LUT of 8x8x1 codevectors, and decompress using this
LUT instead.
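That codevector transform might look like the sketch below. The pixel-replication upsampling is an assumption made for simplicity; the error diffusion itself is standard Floyd-Steinberg, clipped at the block edges so the diffused error stays within the 8x8 region, as the text requires.

```python
# Convert one 4x4x8 codevector into an 8x8x1 codevector: upsample by pixel
# replication, then halftone with Floyd-Steinberg diffusion confined to the
# 8x8 block.  Applying this to every codevector yields the second decode LUT.
def to_8x8x1(codevector_4x4):
    """codevector_4x4: 4 rows of 4 ints in 0..255 -> 8 rows of 8 bits."""
    # Upsample 4x4 -> 8x8 by replicating each pixel and each row.
    img = [[float(v) for v in row for _ in (0, 1)]
           for row in codevector_4x4 for _ in (0, 1)]
    out = [[0] * 8 for _ in range(8)]
    for y in range(8):
        for x in range(8):
            old = img[y][x]
            new = 255.0 if old >= 128 else 0.0
            out[y][x] = 1 if new else 0
            err = old - new
            # Diffuse the error, dropping any share that would leave the 8x8.
            if x + 1 < 8:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < 8:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < 8:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# binary_lut = [to_8x8x1(cv) for cv in codebook_4x4]   # hypothetical codebook
```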
Tonal Reproduction Curve Shift
Since the HVQ indices are only a mapping into the codevectors they represent,
a TRC mapping may be applied to the HVQ codevectors before decompression,
instead of operating on the entire image.
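A sketch of that TRC shift, assuming a simple gamma curve as the tone mapping (the gamma value and curve shape are illustrative, not from the source): the curve is applied once to the handful of codevectors rather than to every pixel, and the HVQ indices are untouched.

```python
# Apply a tone reproduction curve to the codevectors instead of the image.
def apply_trc(codebook, curve):
    """codebook: list of flat pixel vectors; curve: 256-entry 8-bit TRC table."""
    return [[curve[p] for p in vec] for vec in codebook]

# Hypothetical TRC: a gamma-0.45 brightening curve.
gamma_curve = [min(255, round(255 * (i / 255) ** 0.45)) for i in range(256)]

# 256 codevectors are remapped instead of millions of pixels; decompression
# with the remapped codebook then yields the tone-shifted image.
```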
Rotation
Since an image of 4x4 HVQ indices is sixteen times smaller than its precompressed
counterpart, both the indices themselves and the HVQ codevectors may be
rotated in ninety-degree increments before decompression, to yield a rotated
image.
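This works because the 4x4 codevectors are square tiles: rotating the small grid of indices places each tile where it belongs, and rotating each codevector once fixes the tiles' contents. A sketch, with the toy 2x2 "codevectors" in the usage comment being an assumption for illustration:

```python
# Rotate an HVQ-compressed image 90 degrees clockwise without decompressing:
# rotate the index grid, and rotate each codevector in the codebook once.
def rot90_cw(grid):
    """Rotate a list-of-rows grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def rotate_compressed(index_grid, codebook):
    # Decompressing the rotated indices with the rotated codebook yields
    # the rotated image, at a cost proportional to the (16x smaller)
    # index grid plus the fixed-size codebook.
    return rot90_cw(index_grid), [rot90_cw(cv) for cv in codebook]

# Toy example: a 2x2 grid of indices over two 2x2 codevectors.
idx = [[0, 1], [1, 0]]
cb = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
ridx, rcb = rotate_compressed(idx, cb)
```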
The Compression & Classification Group at Stanford has done some work on
hierarchical table-lookup and recursive HVQ.
Hierarchical Table-Lookup Vector Quantization for Images by Chaddha,
Mehrotra, and Gray from Stanford.
Image Compression at UMBC.
For more information, contact