Re: Features

From: Dancer <dancer@dont-contact.us>
Date: Tue, 25 Nov 1997 00:01:36 +1000

We average about 2%. Mostly, my qualms about memory usage come from the requirement of
having the uncompressed object and the compressed object in memory simultaneously during
compression. Essentially (for safety) you take two buffers, each the size of the original
object.
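
As a minimal illustration of that (nothing from Squid or sparent; the output-buffer size
is just the worst case the zlib documentation quotes for compress()), the whole-object
approach looks roughly like this, with both full-size buffers live at once:

    /* Sketch only: compress a whole cache object in one call.  The raw
     * object and the compressed copy sit in memory at the same time,
     * so peak usage is roughly twice the object size. */
    #include <stdlib.h>
    #include <zlib.h>

    /* Returns a malloc()d buffer holding the compressed object, or NULL. */
    unsigned char *compress_object(const unsigned char *obj,
                                   unsigned long obj_len,
                                   unsigned long *out_len)
    {
        /* Worst case per the zlib docs: input + 0.1% + 12 bytes. */
        unsigned long bound = obj_len + obj_len / 1000 + 12;
        unsigned char *dst = malloc(bound);      /* second full-size buffer */

        if (dst == NULL)
            return NULL;
        *out_len = bound;
        if (compress(dst, out_len, obj, obj_len) != Z_OK) {
            free(dst);
            return NULL;
        }
        return dst;     /* caller frees this alongside the original object */
    }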

Now, you can do it in pieces... I did it when I implemented the streaming compression on
top of zlib, which bypasses the memory-hog problem, but there are a couple of minor
restrictions on how many bytes you can feed in per compression call, and how many per
decompression call (multiples of 3 and 4 respectively; 24 works well as an input size and
32 as an output size). Perhaps some of those could be relaxed, since they mostly came from
the integrity hashing for encryption. I'll pull out the code and see if I can't work a
compression routine into sparent 1.4. I know _I_ could use it, and then I don't have to
shoehorn it into Squid. Someone else can do that later.
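
For reference, stock zlib streaming has no such chunk-size limits of its own; the
multiples of 3 and 4 presumably fall out of the 6-bit encoding layer (3 raw bytes map to
4 printable characters). A rough sketch of plain chunked deflate(), with only one small
output buffer live at a time (write_fn here is a hypothetical sink such as a socket or
disk writer, not anything in Squid), would be:

    #include <zlib.h>

    #define IN_CHUNK  4096
    #define OUT_CHUNK 4096

    int stream_compress(const unsigned char *obj, unsigned long obj_len,
                        int (*write_fn)(const unsigned char *, unsigned))
    {
        z_stream zs;
        unsigned char out[OUT_CHUNK];
        unsigned long off = 0;
        int flush, rc = Z_OK;

        zs.zalloc = Z_NULL;
        zs.zfree  = Z_NULL;
        zs.opaque = Z_NULL;
        if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
            return -1;

        do {
            unsigned in_len = (obj_len - off > IN_CHUNK)
                                  ? IN_CHUNK : (unsigned)(obj_len - off);
            zs.next_in  = (Bytef *)(obj + off);
            zs.avail_in = in_len;
            off += in_len;
            flush = (off == obj_len) ? Z_FINISH : Z_NO_FLUSH;

            do {        /* drain all pending output for this input chunk */
                zs.next_out  = out;
                zs.avail_out = OUT_CHUNK;
                rc = deflate(&zs, flush);
                if (write_fn(out, OUT_CHUNK - zs.avail_out) != 0) {
                    deflateEnd(&zs);
                    return -1;
                }
            } while (zs.avail_out == 0);
        } while (flush != Z_FINISH);

        deflateEnd(&zs);
        return (rc == Z_STREAM_END) ? 0 : -1;
    }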

D

Gregory Maxwell wrote:

> On Mon, 24 Nov 1997, Dancer wrote:
>
> > ZLIB is the obvious thing to use. At its simplest, a cache item could be compressed
> > before storage, and decompressed for transmission.
> >
> > Downsides, in block mode: considerable extra memory requirements (approximately
> > double the object), temp space to store decompressed objects that are 'in transit'
> > (either coming in, ready to be compressed, or going out), and CPU load.
> >
> > Stream compression: This is its own bag of worms. I've _done_ this... and in fact,
> > I have a library that does block/stream encryption via zlib. It was somewhat hellish
> > to implement, but it works. It was intended for use in HTTP, and ties to an
> > encryption library, but the encryption could be cut out of it. It encodes compressed
> > data to 6-bit printable characters, not standard base-64 but a quickish encoding
> > that I cooked up at the time (we actually didn't want to be base-64 compatible when
> > it was written).
> >
> > I might look into how this might be patched into Squid....it might be useful for
> > inter-cache transfers.
> >
> > D
>
> I would also suggest you examine LZO... It has VERY low memory
> requirements in most of its modes, for both compression and
> decompression.. Decompression is always very fast, and there are several
> compression modes for every decompressor, with different time/compression
> tradeoff levels..
>
> I would suggest Squid provide both zlib and LZO..
>
> Zlib would be suitable for slower links and faster computers, while I
> expect LZO to have more real-world benefits.
>
> With LZO, Squid could even change its compression mode depending on system
> load.. With some of the faster LZO compression modes, a typical P100 could
> easily saturate 100Mb/s Ethernet..
>
> How many of us have caches with more than 20% CPU in use? Not many, I bet,
> unless the cache is doing other things....
>
> Why not harvest that CPU?
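
Purely to illustrate Gregory's point about memory, here is a rough sketch using the
lzo1x_1 mode from LZO/miniLZO (again, not Squid code; the worst-case output bound is the
one the LZO docs give). Compression needs only a small fixed scratch buffer, and
lzo1x_decompress() needs no working memory at all:

    #include <stdlib.h>
    #include "minilzo.h"

    /* lzo1x_1 wants LZO1X_1_MEM_COMPRESS bytes of aligned scratch space. */
    static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                              / sizeof(lzo_align_t)];

    /* Call lzo_init() once at program start before using this. */
    int lzo_compress_object(unsigned char *obj, lzo_uint obj_len,
                            unsigned char **out, lzo_uint *out_len)
    {
        /* Documented worst-case expansion for the lzo1x algorithms. */
        lzo_uint bound = obj_len + obj_len / 16 + 64 + 3;

        *out = malloc(bound);
        if (*out == NULL)
            return -1;
        if (lzo1x_1_compress(obj, obj_len, *out, out_len, wrkmem) != LZO_E_OK) {
            free(*out);
            return -1;
        }
        return 0;
    }

Switching between LZO's faster and heavier compression modes mainly means calling a
different compress routine (with its own, larger scratch buffer for the heavier ones),
which is what makes the load-adaptive idea attractive.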

--
Note to evil sorcerers and mad scientists: don't ever, ever summon powerful
demons or rip holes in the fabric of space and time. It's never a good idea.
ICQ UIN: 3225440
Received on Mon Nov 24 1997 - 07:05:38 MST
