From ajr@eng.cam.ac.uk Thu Aug 11 00:11:23 1994
Path: saips.cv.nrao.edu!hearst.acc.Virginia.EDU!concert!gatech!swrinde!pipex!lyra.csx.cam.ac.uk!nntp-serv.cam.ac.uk!ajr
From: ajr@eng.cam.ac.uk (Tony Robinson)
Newsgroups: comp.compression
Subject: Re: Rice Algorithm
Date: 08 Aug 1994 07:22:29 GMT
Organization: Engineering Department, Cambridge University, England.
Lines: 59
Message-ID:
References: <320dbq$35i@ucthpx.uct.ac.za>
NNTP-Posting-Host: cyan.eng.cam.ac.uk
In-reply-to: brendt@dip1.ee.uct.ac.za's message of 6 Aug 1994 16:22:18 GMT
In article <320dbq$35i@ucthpx.uct.ac.za> brendt@dip1.ee.uct.ac.za (Brendt Wohlberg) writes:
>
> Does anyone have any information/references relating to the Rice
> Algorithm for lossless image compression?
Yes - it is the simplest Huffman coding that you can imagine.
The nth code is n bits long, hence has an implied probability of 2^{-n}.
For simplicity, start with a run of zeros and add a 1 terminator:
code value
1 0
01 1
001 2
0001 3
00001 4
000001 5
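This unary code can be sketched in a few lines of Python (a minimal
illustration, not taken from any Rice coder; the function names are mine):

```python
def unary_encode(n):
    """Encode n as n zeros followed by a 1 terminator, as in the table above."""
    return "0" * n + "1"

def unary_decode(bits):
    """Count zeros up to the first 1; return the value and any remaining bits."""
    n = bits.index("1")
    return n, bits[n + 1:]
```

For example, unary_encode(3) gives "0001", matching the table.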
It can easily be extended to cope with any signal variance. For
example, add another two bits at the bottom end:
code value
100 0
101 1
110 2
111 3
0100 4
0101 5
0110 6
0111 7
00100 8
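In general, with k extra low-order bits this is the Rice code with
parameter k: send the quotient n >> k in unary, then the k low bits of n
in binary. A minimal sketch (my own function names, string bits for
clarity rather than speed):

```python
def rice_encode(n, k):
    # Quotient n >> k as a run of zeros plus a 1 terminator,
    # then the k low-order bits of n, most significant first.
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "0" * q + "1"
    for i in range(k - 1, -1, -1):
        bits += "1" if (r >> i) & 1 else "0"
    return bits

def rice_decode(bits, k):
    # Undo the above: count zeros for the quotient, then read k remainder bits.
    q = bits.index("1")
    r = int("0" + bits[q + 1:q + 1 + k], 2)
    return (q << k) | r, bits[q + 1 + k:]
```

With k = 2 this reproduces the second table (rice_encode(4, 2) gives
"0100"), and with k = 0 it reduces to the plain unary code of the first.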
The optimal number of final bits to be added is:
n = \log_2 \left( \log(2) {\sigma \over \sqrt 2 } \right)
= \log_2 \left(\log(2) E(|x|) \right)
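That formula can be evaluated directly, estimating E(|x|) from the data;
a small sketch (my own names, clamping at zero for small variances --
the rounding policy is my assumption):

```python
import math

def optimal_k(samples):
    # k = log2(log(2) * E(|x|)) from the formula above,
    # rounded to the nearest non-negative integer.
    mean_abs = sum(abs(x) for x in samples) / len(samples)
    if mean_abs <= 0:
        return 0
    return max(0, round(math.log2(math.log(2) * mean_abs)))
```

Low-variance data gets k = 0 (plain unary); larger signals get more
low-order bits.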
I think the proper reference to this work is:
@techreport{Rice91,
author= "Robert F. Rice",
title= "Some Practical Noiseless Coding Techniques,
{Part II, Module PSI14,K+}",
type= "JPL Publication",
number= "91-3",
institution= "Jet Propulsion Laboratory",
month= nov,
year= 1991}
It works surprisingly well. If you want to test your code out I can
provide a working implementation in:
svr-ftp.eng.cam.ac.uk:/comp.speech/sources/shorten-1.14.tar.Z
But this is designed to work on N independent data streams (i.e. stereo
audio), not images.
Tony Robinson
From bhaskara@fc.hp.com Thu Aug 11 00:11:57 1994
Newsgroups: comp.compression
Path: saips.cv.nrao.edu!hearst.acc.Virginia.EDU!concert!gatech!howland.reston.ans.net!vixen.cso.uiuc.edu!sdd.hp.com!hplabs!hplntx!bhaskara
From: bhaskara@fc.hp.com (Vasudev Bhaskara)
Subject: Re: IDCT algorithms
Sender: news@hpl.hp.com (HPLabs Usenet Login)
Message-ID:
Date: Wed, 10 Aug 1994 23:52:54 GMT
References: <324nd2$okc@styx.uwa.edu.au>
Nntp-Posting-Host: hplvab.hpl.hp.com
Organization: Hewlett-Packard Laboratories, Palo Alto, CA
X-Newsreader: TIN [version 1.2 PL2]
Lines: 31
Michael Simmons - faculties (michael@ecel.uwa.edu.au) wrote:
: I have been doing some reading up on IDCT algorithms
: Am I correct in saying that the algorithm presented in the paper,
: "A Forward-Mapping Realization of the Inverse Discrete Cosine Transform"
: by Leonard McMillan and Lee Westover of SUN,
: is the best for Microprocessor based MPEG decoding.
: I have read about 5 papers now and as far as I can tell they
: are all pretty similar.
: Is there any particular paper that any one could recommend to me.
:
The McMillan-Westover method uses a cache (essentially a lookup table),
and depending on your processor this may or may not be fast. In fact,
the paper "Statistical IDCT for Image Compression" by Andy C. Hung and
Teresa H.Y. Meng in SPIE, Vol. 2187, Feb. 1994 compares decoding speed
(in kiloblocks/sec) for the Westover method against others, and
depending on compression ratio and processor the forward-mapping scheme
was not that fast. For example, on the Lena image (for JPEG; the same
should hold for MPEG data too), faster methods yield more than a 2X
speedup over the forward-mapping scheme on a Sparc, and the same seems
to hold on a MIPS machine. The faster scheme is an extension of the
forward-mapping approach in which some symmetries in the IDCT kernel
are exploited.
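For the curious, the forward-mapping idea in one dimension is roughly:
instead of computing each output sample from all inputs, scatter each
nonzero coefficient's precomputed basis vector into the output, which
pays off when quantised blocks are mostly zero. A rough sketch of the
concept (not the McMillan-Westover code, and without the symmetry
tricks of the faster schemes):

```python
import math

def forward_mapping_idct_1d(coeffs):
    # Precompute the IDCT basis vector for each coefficient index k.
    N = len(coeffs)
    basis = [[(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)) *
              math.cos(math.pi * (2 * n + 1) * k / (2 * N))
              for n in range(N)]
             for k in range(N)]
    out = [0.0] * N
    for k, c in enumerate(coeffs):
        if c:  # skip zero coefficients -- the win on sparse quantised blocks
            for n in range(N):
                out[n] += c * basis[k][n]
    return out
```

A real decoder would do this separably over the rows and columns of an
8x8 block, with the basis tables precomputed once.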
We have a faster scheme which gives real-time MPEG-1 decompression on
HP workstations, but the code is not in the public domain.
- Vasudev B.