Re: squid rewrite

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Fri, 06 Jun 1997 08:58:08 +0200

My belief is that if squid is being split, then it should be
split by function:

* A backend store manager
* The backend DNS daemons
* Frontend network handlers
* Backend handlers for things that are not well tested/integrated (like
ftpget in the current release)
* A supervisor, that checks that everything is OK, and restarts things
that crash.

The frontends communicate with the backends via a multiplexed channel
(like the one used for dnsserver). No need to open a separate connection
for each request.

It is probably best to let each frontend read/write the disk itself, and
only use the backend store manager to find out where to read/write.

ICP is probably best placed in the backend store manager for latency
reasons.

Example operations, frontend <--> store manager:

* Where is XXX stored? (locks for reading)
* Done with XXX
* Gimme a new file where I can store XXX
* XXX is now public (to let other requestors jump on the train)
* Done storing XXX
* Aborted store of XXX

The only thing I see that will cause a great deal of headache is
"multiple-readers-while-fetching", where clients other than the first
requestor might jump on a started request. But if the store manager
keeps track of which frontend is handling the request, it is only a
matter of internal proxying (the additional frontends do a proxy-only
request to the frontend fetching XXX).

Another issue is very large objects, as always. But that is also the
case with the current design...

Shared memory and semaphores are best left alone if we are thinking of
being portable... and if using a central manager, then they can be
avoided without too much performance degradation.

This design has the pleasant side effect of getting rid of the max file
descriptors per process problem. Each frontend has its own file
descriptors (both net and disk), and starting another frontend gives you
a new set... (assuming that the kernel has large enough tables in
total...)

---
Henrik Nordström
Oskar Pearson wrote:
> Splitting squid into multiple processes presents various problems:
> o       You would need an "expired" (expire-dee) that will remove objects that
>         are expired, and the individual processes will have to handle the object
>         suddenly not existing without freaking out.
>
> o       You would have to do something like shared memory to know what
>         objects are in the cache... otherwise each connection will use
>         an expensive connection to a central process to find out what objects
>         are/aren't in the cache.
>
> o       There would have to be some way of saying "I am locking the
>         in-memory store to delete/add an object". You could do this with
>         semaphores, I suppose... (anyone have experience in this - It's
>         been described by someone I know as "not fun")
>
>         Oskar
Received on Tue Jul 29 2003 - 13:15:41 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:11:19 MST