Re: compressed cache store

From: Robert Collins <robert.collins@dont-contact.us>
Date: 30 Oct 2001 00:01:04 +1100

On Mon, 2001-10-29 at 22:29, Roger Venning wrote:
> Hi guys,
>
> I saw a talk by a Hugh Williams (http://goanna.cs.rmit.edu.au/~hugh/)
> recently. During which he mentioned the result of one aspect of work on
> building fast web search engines - essentially compressed disk storage
> of objects that are later retrieved for summarisation. He claimed that
> the benefits of compression were that you could rip data off disk faster
> and (more dubiously) reduce seek time.

Well, less data to transfer = lower disk load and less chance of
fragmentation per object. Adrian's COSS and related stores reduce seek
time even more substantially, but without altering the data size.
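To make the "less data to transfer" point concrete, here's a small sketch (Python, purely illustrative — squid itself is C, and the object body here is made up) showing how much a typical repetitive HTML body shrinks under gzip, which is the saving a compressed store would see on every disk read:

```python
import gzip

# Hypothetical cached object body: repetitive HTML compresses well.
body = b"<html><body>" + b"<p>hello squid</p>" * 200 + b"</body></html>"

compressed = gzip.compress(body)

# Fewer bytes to pull off disk per object hit.
print("raw bytes:       ", len(body))
print("compressed bytes:", len(compressed))
assert gzip.decompress(compressed) == body  # lossless round trip
```

The trade, of course, is CPU spent compressing on store and decompressing on hit — which is exactly why the CPU-bound argument below matters.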

> Does anyone on the list have any feelings

I'll give you my feelings, no hard stats to hand.

At the moment it's easier to add spindles to a squid server than CPU, because squid can leverage multiple disks but not multiple CPUs, so squid's upper bound is the CPU.

In the future, if squid starts to become an effectively multi-CPU server (via threads, multi-frontend-single-cache and IPC, or whatever), then a compressed store may well be a good tuning option.

As for what gets sent to the clients, see the transfer-encoding work that is complete for now, but probably currently broken. The point where it fell down was getting the upstream data through the disk store to the client side; you'll need to solve that problem first, before you can address hinting to squid that the store is compressed or not compressed. The transfer-encoding-modules branch has gzip et al. compression for transfer-encoding, so you can give a decoder chain a gzipped object and tell it to spit out a chunked object, or a stream, or whatever.
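The decoder-chain idea can be sketched like this (Python, my own illustration — function names like gunzip_stream and chunked_encode are mine, not anything in the squid tree): one stage incrementally gunzips the stored object, the next re-emits it as an HTTP/1.1 chunked stream.

```python
import gzip
import zlib

def gunzip_stream(chunks):
    """Decoder stage: incrementally inflate a gzip byte stream."""
    d = zlib.decompressobj(wbits=zlib.MAX_WBITS | 16)  # 16 => expect gzip framing
    for chunk in chunks:
        out = d.decompress(chunk)
        if out:
            yield out
    tail = d.flush()
    if tail:
        yield tail

def chunked_encode(chunks):
    """Encoder stage: wrap each piece in HTTP/1.1 chunked transfer coding."""
    for chunk in chunks:
        yield b"%x\r\n%s\r\n" % (len(chunk), chunk)
    yield b"0\r\n\r\n"  # final zero-length chunk terminates the body

# Feed a gzipped object through the chain, get a chunked stream out.
body = b"hello squid " * 50
gz = gzip.compress(body)
wire = b"".join(chunked_encode(gunzip_stream([gz[:20], gz[20:]])))
```

The point of chaining generators is that neither stage needs the whole object in memory — each piece flows from the store, through the decoder, to the client side.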

Cheers,
Rob
Received on Mon Oct 29 2001 - 05:57:25 MST
