Re: Log Rotating Script/Utility

From: Stefan Monnier <monnier+/news/lists/squid@dont-contact.us>
Date: 09 Jun 1997 14:20:06 -0400

Graham Toal <gtoal@vt.com> writes:
> much of the data in a squid cache is compressible. Anyone have an inactive
> cache image they can compress to see what savings they get?

Most files are fairly small, and most compression systems only work at the
file level, so you won't get very good compression on most files. Of course,
you will still get good results on the .ps files that many people are stupid
enough to put on their web sites (hint: please use .ps.gz files to reduce my
waiting time by about a factor of 4, thanks).
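If someone does want to measure this on an inactive cache image, something
like the following would give a rough answer. This is just a sketch in
Python; the cache path and the list of skipped extensions are made-up
placeholders, not anything squid-specific:

    import gzip, os

    CACHE_DIR = "/var/spool/squid"   # hypothetical: point at your cache_dir
    SKIP = (".gz", ".jpg", ".gif")   # already-compressed formats

    raw = packed = 0
    for dirpath, _, names in os.walk(CACHE_DIR):
        for name in names:
            path = os.path.join(dirpath, name)
            if path.lower().endswith(SKIP):
                continue
            with open(path, "rb") as f:
                data = f.read()
            raw += len(data)                     # size as stored today
            packed += len(gzip.compress(data))   # size if gzipped per file

    if packed:
        print("overall compression factor: %.2f" % (raw / float(packed)))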

> PS I know the 'correct' way to do this is to use a compressing file-system
> layer on top of the real filesystem, but most of us don't have that luxury
> available.

I've moved my personal news spool to a compressed file-system and see
compression by a factor of 2.5 on average. I would expect pretty good
compression for squid as well (except for .jpg, .gif and .gz files, of
course), but it would probably require switching from a filesystem-based
database to something like Berkeley DB in order to get big files (I hope
this is already on the 2.0 todo list).
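To make the Berkeley DB idea concrete, here is roughly what I have in mind,
sketched with Python's dbm module standing in for Berkeley DB. The helper
names are mine, and squid does nothing like this today; it just shows many
objects living in one big DB file, compressed on the way in:

    import dbm, gzip

    # One big DB file instead of one small cache file per object;
    # bodies are gzipped on the way in and gunzipped on the way out.
    store = dbm.open("cache.db", "c")

    def put(url, body):
        store[url] = gzip.compress(body)

    def get(url):
        return gzip.decompress(store[url])

    put("http://example.com/", b"<html>hello</html>" * 100)
    print(len(get("http://example.com/")))
    store.close()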

        Stefan

PS: the compression I use for my news spool is file-based (and can be turned
    on/off on a per-file basis), and I use it in offline-compression mode: no
    compression happens automatically, because compression runs in the kernel
    non-preemptively and produces annoying interruptions, whereas
    decompression is unnoticeable.
PPS: it would probably be handy if the web pages were put in different DB
    files depending on their type, so that you could easily decide to
    compress html but not jpeg; see the sketch below.
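
Sketched the same way, with Python's dbm again standing in for Berkeley DB
and a made-up type-to-file mapping (all names here are for illustration
only):

    import dbm, gzip

    # Route objects into per-content-type DB files, so a compressor
    # can target the text databases and leave the jpeg/gif ones alone.
    COMPRESSIBLE = {"text/html", "text/plain", "application/postscript"}

    def store_for(content_type):
        return dbm.open(content_type.replace("/", "_") + ".db", "c")

    def put(url, content_type, body):
        if content_type in COMPRESSIBLE:
            body = gzip.compress(body)
        store = store_for(content_type)
        store[url] = body
        store.close()

    put("http://example.com/a.html", "text/html", b"<html>...</html>")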