[squid-users] Distributed High Performance Squid

From: Joel Ebrahimi <jebrahimi_at_bivio.net>
Date: Wed, 19 Aug 2009 16:58:33 -0700

Hi,

I'm trying to build a high-performance Squid deployment. The performance
comes from the hardware rather than from any changes to the code base. As
a beginning Squid user, I figured I would ask the list for the best way
to set up this configuration.

The architecture looks like this: there are 12 CPU cores, each running
its own instance of Squid. All 12 cores share the same disk space but
not the same memory, and each runs its own instance of the OS. They can
communicate over an internal network. A network processor slices up
sessions and hands them off to whichever of the 12 cores is available.
There is a single conf file and a single logging directory.

The problem I can see with this setup is that each of the 12 Squid
instances acts independently, so any of them could try to write to the
same log file at the same time. I'm not sure what impact that could
have; concurrent appends could interleave or clobber each other's
entries.
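For what it's worth, one workaround might be to give each instance its own access log and merge them afterwards on the management core. Each core would be started with its own small conf (e.g. via `squid -f`) that overrides only the log paths; the file names and paths below are just illustrative assumptions, not anything Squid mandates:

```
# Hypothetical per-node overrides, started with: squid -f /etc/squid/node01.conf
# (paths and naming scheme are assumptions for illustration)
access_log /shared/logs/access-node01.log squid
cache_log  /shared/logs/cache-node01.log
```

That avoids concurrent writers entirely, at the cost of a later merge step.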

I actually have it set up this way now and it works well, though it's a
very small test environment, and I'm concerned that issues may only show
up in larger environments where the logs are written very frequently.

Looking through some online material, I saw that there are other logging
mechanisms. The ones that looked useful here are the daemon and UDP
modules. There is actually a 13th core in the system that is used for
management, and I was wondering whether setting up UDP logging on that
core, with the 12 Squid instances sending their log entries over the
internal network, would work.
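To make the idea concrete, here is a minimal sketch of the kind of UDP collector the 13th core could run. The port, addresses, and sample log line are assumptions for illustration; on the Squid side this would pair with something like `access_log udp://<mgmt-core>:<port> squid`, if the Squid version in use includes the UDP logging module.

```python
# Minimal sketch of a UDP access-log collector for the management core.
# All names, ports, and the sample log line are illustrative assumptions.
import socket

def make_collector(host="127.0.0.1"):
    """Bind a UDP socket for collecting log lines.

    Binds an ephemeral port here for the demo; a real deployment would
    use a fixed port that all 12 squid.conf files point at.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, 0))
    return sock, sock.getsockname()[1]

def collect_one(sock, bufsize=8192):
    """Receive one datagram and decode it as a single log line."""
    data, addr = sock.recvfrom(bufsize)
    return data.decode("utf-8", errors="replace").rstrip("\n"), addr

# Demo: simulate one of the 12 Squid instances sending a log line.
collector, port = make_collector()
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sample = ("1250725113.880 35 10.0.0.5 TCP_MISS/200 1234 GET "
          "http://example.com/ - DIRECT/93.184.216.34 text/html")
sender.sendto((sample + "\n").encode(), ("127.0.0.1", port))
line, addr = collect_one(collector)
print(line)
sender.close()
collector.close()
```

Since UDP is lossy, a real collector would also need to tolerate dropped or reordered datagrams; that may be acceptable for access logs but is worth keeping in mind.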

Thoughts, or better ideas? Problems with either of these scenarios?
 

Thanks in advance,

// Joel


Joel Ebrahimi
Solutions Engineer
Bivio Networks
925.924.8681
jebrahimi_at_bivio.net
Received on Wed Aug 19 2009 - 23:57:05 MDT

This archive was generated by hypermail 2.2.0 : Thu Aug 20 2009 - 12:00:04 MDT