Re: squid rewrite

From: Oskar Pearson <oskar@dont-contact.us>
Date: Thu, 5 Jun 1997 17:42:47 +0200 (GMT)

Dean Gaudet wrote:
>
> On Fri, 30 May 1997, Miguel A.L. Paraz wrote:
> > I think we can have both clean code and a single process.
>
> Oh another note from current Apache discussions (we're at the same
> phase right now, done with a major release and planning the future).
> An argument for multiple heavy-weight processes is that a seg fault or
> other problem won't take out your entire server. This is more an issue
> with apache where users tend to write buggy modules (nah we never code
> seg faults, yeah that's it). But just a point ...
Well, Squid doesn't die... only buggy programs die, and Squid only
dies when I tell it to ;)

> If squid doesn't go multi-process or multi-threaded it can't take as
> much advantage of SMP systems. When you start considering HTTP/1.1
Agreed... in our case, though, we find that the CPU isn't the limiting
factor at all; most of the time goes to disk wait. As you say, that
may not hold for HTTP/1.1.

Splitting squid into multiple processes presents various problems:
o You would need an "expired" (expire-dee), i.e. an expiry daemon, to
        remove expired objects, and the individual processes would have to
        handle an object suddenly not existing without freaking out (see
        the first sketch after this list).

o You would need something like shared memory to know which objects
        are in the cache... otherwise each connection would need an
        expensive round trip to a central process just to find out which
        objects are/aren't in the cache.

o There would have to be some way of saying "I am locking the
        in-memory store to delete/add an object". You could do this with
        semaphores, I suppose... (anyone have experience with this? It's
        been described by someone I know as "not fun".) See the second
        sketch after this list.
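
Not Squid code, just a rough C sketch of the first point: a worker
process opens an object's swap file, and if the expiry daemon has
already unlinked it, that is treated as a cache miss instead of a
fatal error. The function name and path handling are invented for
illustration.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

/*
 * Returns an open fd, or -1 meaning "treat this as a miss and refetch".
 * ENOENT is the "object suddenly not existing" case and must not be
 * treated as fatal.
 */
int
open_swap_object(const char *swap_path)
{
    int fd = open(swap_path, O_RDONLY);
    if (fd < 0) {
        if (errno == ENOENT) {
            fprintf(stderr, "object %s expired under us, refetching\n",
                swap_path);
            return -1;
        }
        perror("open_swap_object");
        return -1;
    }
    return fd;
}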
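
And a rough sketch of the last two points together, assuming a System V
shared-memory segment for the object index and a single SysV semaphore
as the "I am locking the in-memory store" flag. The keys, structure
layout and sizes are invented for illustration; a real index would be a
proper hash table, and the setup would be done once at startup rather
than by every process.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>

#define INDEX_KEY   0x5153      /* arbitrary IPC keys, illustration only */
#define LOCK_KEY    0x5154
#define MAX_OBJECTS 1024

/* glibc makes the caller define this; some systems have it in <sys/sem.h> */
union semun {
    int val;
    struct semid_ds *buf;
    unsigned short *array;
};

struct shared_index {
    int n_objects;
    char keys[MAX_OBJECTS][128];    /* crude fixed-size URL table */
};

static int semid;
static struct shared_index *idx;

static void
index_lock(void)                    /* P(): wait for and take the lock */
{
    struct sembuf op = { 0, -1, SEM_UNDO };
    if (semop(semid, &op, 1) < 0)
        perror("semop lock");
}

static void
index_unlock(void)                  /* V(): release the lock */
{
    struct sembuf op = { 0, 1, SEM_UNDO };
    if (semop(semid, &op, 1) < 0)
        perror("semop unlock");
}

int
main(void)
{
    union semun arg;
    int shmid = shmget(INDEX_KEY, sizeof(*idx), IPC_CREAT | 0600);
    semid = semget(LOCK_KEY, 1, IPC_CREAT | 0600);
    if (shmid < 0 || semid < 0) {
        perror("shmget/semget");
        exit(1);
    }
    idx = shmat(shmid, NULL, 0);
    if (idx == (void *) -1) {
        perror("shmat");
        exit(1);
    }
    arg.val = 1;
    semctl(semid, 0, SETVAL, arg);  /* really belongs in one-time startup */

    index_lock();
    /* "I am locking the in-memory store to add an object" */
    if (idx->n_objects < MAX_OBJECTS) {
        strncpy(idx->keys[idx->n_objects], "http://example.com/", 127);
        idx->n_objects++;
    }
    index_unlock();

    shmdt(idx);
    return 0;
}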

        Oskar