RE: [squid-users] Is it possible to handle 200reqs/s?

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Thu, 12 Feb 2004 11:08:45 +0100 (CET)

On Thu, 12 Feb 2004, Chris Wilcox wrote:

> I'm interested in the same question! How did you optimise Squid to get such
> an improvement?

Things one needs to look into (rough example settings for each point
follow after the list):

* Disk subsystem. Use more than one drive and one of the async cache_dir
types. One cache_dir per physical drive, maybe more if using diskd.

* If using diskd, remember to carefully read the Squid FAQ on how to
configure the OS to support diskd.

* Number of filedescriptors. The default of 1024 is not sufficient for
high loads.

* Total number of sockets, file handles etc allowed to be open in the
system, and per-process limits of the same.

* SYN backlog size in the OS settings. Especially important if you have
WAN or dial-up users connecting.

* Unbound TCP port range available for outgoing connections. Some systems
default to a range of only about 4000 ports, which quickly runs out when
approaching 150-200 req/s.

* Sufficient amount of memory, and in some cases OS tuning to allow for
large processes. Some OSes also require swap to be disabled to prevent
swapping even if there is sufficient memory.
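
For the disk layout, a minimal squid.conf sketch, assuming two dedicated
drives mounted as /cache1 and /cache2 (the paths and sizes are just
placeholders):

  # one async cache_dir per physical drive (aufs shown; diskd works too)
  cache_dir aufs /cache1 20000 16 256
  cache_dir aufs /cache2 20000 16 256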
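
If you go with diskd, it talks to its helper processes over SysV message
queues and shared memory, and most kernels ship with small defaults. An
illustrative Linux sysctl sketch (the right values depend on how many
cache_dirs you run; see the FAQ for your OS):

  kernel.msgmnb = 16384     # max bytes per message queue
  kernel.msgmax = 8192      # max size of a single message
  kernel.shmmax = 2097152   # max shared memory segment, in bytes
  kernel.shmall = 2097152   # total shared memory pages allowed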
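
On filedescriptors: Squid 2.5 compiles in the limit it sees at build
time, so raise the per-process limit both before running ./configure and
in the startup script. A sketch (8192 is just a plausible number for
this load):

  ulimit -HSn 8192
  ./configure ...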
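
The system-wide file handle ceiling is a separate knob from the
per-process one. On Linux it is a sysctl; something like:

  fs.file-max = 65536   # system-wide limit on open file handles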
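
For the SYN backlog, illustrative Linux sysctls (the names differ on
other systems):

  net.ipv4.tcp_max_syn_backlog = 4096   # half-open connection queue
  net.core.somaxconn = 1024             # accept() backlog ceiling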
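
On the port range: at 200 req/s, with each outgoing connection lingering
roughly 60 seconds in TIME_WAIT, you can have on the order of 200 * 60 =
12000 ports tied up at once, so a 4000-port range is clearly too small.
On Linux:

  net.ipv4.ip_local_port_range = 1024 65535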
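
On memory, the usual rule of thumb is roughly 10 MB of RAM per GB of
cache_dir on top of cache_mem, so size the box accordingly. A squid.conf
sketch (the value is a placeholder):

  # memory for hot objects; total process size will be much larger
  cache_mem 256 MB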

Then monitor the system and tweak things until you see the desired
results.

Regards
Henrik