Re: Is squid unable to handle the load?

From: Chet Murthy <chet@dont-contact.us>
Date: Tue, 26 May 98 13:34:24 -0400

Is it clear that the problem is with queueing up for the disk?

From what I read below, it looks like you can turn redirection off,
after having enabled it and elicited the problem.

If you do this, and then wait a while, you should (i) eventually see
all the connections break, and then (ii) Squid should be fast enough
again.

If you don't see this behaviour, it seems to me quite likely that some
other problem is happening.

The thing I'm not sure about is the shutting-down of connections, if
you change the network routing.

However, you ought to be able to get the same effect by HUPing the
server. The key question in my mind is whether this is in fact a
problem that has to do with the "recent workload", or whether it is
something deeper.

--chet--

Mark Dabrowski wrote:
>
> Maybe someone can help us with this one...
>
> We're an ISP. For a few months now we have been trying to configure our
> Squid to transparently cache traffic from our terminal servers in order to
> improve our clients' web access. Everything is configured and works
> properly; however, the Squid machine is unable to handle the load and slows
> down after running for a few hours with terminal-server traffic redirected
> into the proxy machine. What we have to do is remove the redirection and
> restart Squid; then it will work fine for a while, and slow down again.
>
> Here is more detailed information:
>
> Squid machine:
> Pentium II 300MHz, 384MB RAM, 24GB UltraWide SCSI HDD
> BSDI 3.1, Squid 1.1.21, IPFilter 3.2.7
>
> Cisco 3600 router
>
> The way we test is: we redirect traffic on port 80 from only 4 terminal
> servers (PM3s) - one full class C - via the Cisco 3600 to the Squid
> machine. Then on the Squid machine, IPFilter redirects port 80 to 8080,
> which is Squid's port.
>
> With this configuration a maximum of 192 clients can request web pages
> through Squid at the same time. As I said before, this works excellently at
> the beginning, with a high hit ratio, then after a few hours it starts
> slowing down, and finally after 8-10 hours it takes forever to request
> anything via Squid.
>
> At the time this happens, vmstat reports 110MB of RAM available. We are
> using GNU Malloc.
>
> Does anybody have any ideas?
>

Check your disk usage. You're using *one* big disk?
Squid 1.1 accesses the disk sequentially, causing a bottleneck
when (# disk requests/sec * time/request) = 1.
The disk can't handle as many requests as are coming in, so Squid's
request queue fills up, causing the slowdown.
Note that each miss also causes disk access, for the write.
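The saturation condition above can be sketched numerically. This is a
back-of-the-envelope illustration, not a measurement from the thread: the
10 ms per-request service time is an assumed figure for a random disk
operation on hardware of that era.

```python
# Sketch of the queueing argument: the request queue grows without bound
# once (request rate * time per request) reaches 1, i.e. the disk is busy
# 100% of the time and can no longer keep up with arrivals.

def disk_utilization(requests_per_sec: float, seconds_per_request: float) -> float:
    """Fraction of time the disk is busy (rho, in queueing terms)."""
    return requests_per_sec * seconds_per_request

SERVICE_TIME = 0.010  # assumption: ~10 ms per random disk operation

for rate in (50, 90, 110):
    rho = disk_utilization(rate, SERVICE_TIME)
    state = "queue grows without bound" if rho >= 1 else "queue stays bounded"
    print(f"{rate} req/s -> utilization {rho:.2f}: {state}")
```

At 110 requests/second the assumed disk is over 100% utilized, which
matches the observed pattern: fine at first, then a steadily growing
backlog until every request "takes forever".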

Go for several disks, preferably striped.
Or go to 1.2; it supports asynchronous disk I/O.

Markus
Received on Tue May 26 1998 - 10:45:25 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:40:20 MST