RE: Little questions

From: Gonzalo A. Arana <garana@dont-contact.us>
Date: Sat, 8 Nov 2003 09:39:30 -0300

>
> On Sat, 2003-11-08 at 08:18, Gonzalo A. Arana wrote:
> > > My instinctive reaction is to run the compression &
> > > decompression in a separate thread. If the queue to the
> > > compression/decompression engine is
> > > large, decrease the compression level used.
> >
> > Good instinct :-)
> > Squid-3 does not have an API for running jobs asynchronously, right?
>
> Not as such. However, a little generalisation of the aufs or
> diskd queueing mechanisms will likely do what you need. There

Entirely true.
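
To check that I'm reading you right: the sketch below is the kind of
queue I have in mind (pthreads; all the names are mine, nothing taken
from the actual aufs/diskd code), with the depth returned by submit()
used to pick the compression level, as suggested above.

    #include <pthread.h>
    #include <queue>
    #include <cstddef>

    // Hypothetical async job queue, loosely in the spirit of the
    // aufs request queue; none of these names come from Squid.
    struct Job {
        void (*run)(void *data);    // executed on the worker thread
        void *data;
    };

    class JobQueue {
    public:
        JobQueue() {
            pthread_mutex_init(&lock, NULL);
            pthread_cond_init(&wakeup, NULL);
            pthread_create(&worker, NULL, workerMain, this);
        }

        // Called from the main thread. Returns the queue depth so
        // the caller can back off, e.g. drop from zlib level 6 to
        // level 1 when the queue grows long.
        size_t submit(const Job &job) {
            pthread_mutex_lock(&lock);
            jobs.push(job);
            size_t depth = jobs.size();
            pthread_cond_signal(&wakeup);
            pthread_mutex_unlock(&lock);
            return depth;
        }

    private:
        static void *workerMain(void *arg) {
            JobQueue *q = static_cast<JobQueue *>(arg);
            for (;;) {
                pthread_mutex_lock(&q->lock);
                while (q->jobs.empty())
                    pthread_cond_wait(&q->wakeup, &q->lock);
                Job job = q->jobs.front();
                q->jobs.pop();
                pthread_mutex_unlock(&q->lock);
                job.run(job.data);  // compress/decompress here;
                                    // completion would be signalled
                                    // back the way aufs/diskd do it
            }
            return NULL;
        }

        pthread_t worker;
        pthread_mutex_t lock;
        pthread_cond_t wakeup;
        std::queue<Job> jobs;
    };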

> are other constraints you'll need to address though. I
> suspect you'll want (from my arch repository
> robertc@squid-cache.org--squid): my squid--diskio--3.0 branch
> (separates out all the storage layout stuff from actual
> diskio, which will ease generalisation of the threaded
> engine)

Sounds just like what I need.
Thank you very much.
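
Just so I'm sure I understand the split, I picture the boundary
roughly like this (my own guess at the shapes, not code from your
branch):

    #include <sys/types.h>
    #include <cstddef>

    // My guess at the kind of boundary the branch draws; none of
    // these names or signatures are taken from the actual code.
    class DiskIOStrategy {          // "actual diskio": how bytes move
    public:
        virtual ~DiskIOStrategy() {}
        virtual void read(int fd, char *buf, off_t off, size_t len) = 0;
        virtual void write(int fd, const char *buf, off_t off, size_t len) = 0;
    };

    class StoreLayout {             // "storage layout": where objects live
    public:
        virtual ~StoreLayout() {}
        // maps an object key to a file/offset, then talks to a strategy
        virtual int openObject(const char *key, DiskIOStrategy *io) = 0;
    };

    // A threaded (or compressing) engine would then only have to
    // implement DiskIOStrategy; the layout side need not change.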

> my squid--mempools--3.0 branch (removes global variable
> manipulation from mempool alloc and frees, which allows
> separate pool allocs to be thread safe, but not those from
> the same pool. You'll want to have a sync layer over whatever
> allocator your compressor/decompressor use).

Ouch! zlib uses malloc by default, and I can't change that without
patching zlib, can I?
So no mempools can be used for those allocations, right?
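
Hmm, although re-reading zlib.h: z_stream has zalloc/zfree/opaque
hooks, so maybe I can hang a locked allocator off those without
patching zlib after all? A rough sketch (the pool calls are
placeholders, plain malloc/free here, not real Squid or mempool APIs):

    #include <pthread.h>
    #include <stdlib.h>
    #include <zlib.h>

    // A locked shim over whatever allocator the compressor ends up
    // using, so a stream running on the worker thread stays safe.
    static pthread_mutex_t poolLock = PTHREAD_MUTEX_INITIALIZER;

    static voidpf lockedAlloc(voidpf opaque, uInt items, uInt size) {
        (void)opaque;
        pthread_mutex_lock(&poolLock);
        voidpf p = malloc((size_t)items * size); // stand-in for a pool alloc
        pthread_mutex_unlock(&poolLock);
        return p;
    }

    static void lockedFree(voidpf opaque, voidpf address) {
        (void)opaque;
        pthread_mutex_lock(&poolLock);
        free(address);                           // stand-in for a pool free
        pthread_mutex_unlock(&poolLock);
    }

    int startDeflate(z_stream *zs, int level) {
        zs->zalloc = lockedAlloc;  // zlib calls these instead of
        zs->zfree = lockedFree;    // malloc/free when they are non-null
        zs->opaque = Z_NULL;       // could carry a per-pool pointer instead
        return deflateInit(zs, level);
    }

If I read the zlib sources right, nearly all of its allocation happens
inside deflateInit()/inflateInit(), so the mutex should hardly ever be
contended once the streams are running.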

Btw, should I download from your arch repository all the files
squid--disk-io--3.0/patch-+([0-9])/squid--disk-io--3.0--{base,patch}-+([0-9]).tar.gz
and then apply all the patches found in each .tar.gz, in order?
I find this 'arch' thing quite confusing. Sorry if this is a silly
question.

>
> Rob
> --
> GPG key available at:
> <http://members.aardvark.net.au/lifeless/keys.txt>.
>

G.