Re: Slow responses

From: Dancer <dancer@dont-contact.us>
Date: Tue, 27 Jan 1998 02:25:46 +1000

Markus Storm wrote:

> > Who runs at that level of requests per second (peak)? Who runs higher than that?
> > What's the highest peak request rate anyone is running?
>
> Our peak was 120 UDP / 335 TCP per sec (enough for this exclusive circle? :-)).

Indeed. I prostrate myself before your expansive access logs.

> We're also sometimes seeing the mentioned behaviour. After a restart,
> everything is fine for some hours. It happens only during evening hours,
> which is not peak *squid* usage time but peak *line* usage time, so I
> suspect it's somehow related to that (packet losses, retransmits, tcp
> buffers, whatever).

Ahhhh... I have two generic suspects: big fat data pipes upstream of squid,
essentially choking the poor dear with traffic... and swap_high/swap_low
thrashing. A couple of operating systems have minor lurking punks and
ne'er-do-wells that we'll haul in and question as becomes necessary.
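
By thrashing I mean the store bouncing off the high watermark and getting purged
in tiny slices over and over. The usual band-aid is to widen the gap between the
two watermarks so purges come in bigger, less frequent batches. Off the top of my
head it would look something like the following -- the directive names are the
ones I used above and the numbers are invented, so check your squid.conf.default
before trusting a word of it:

    # squid.conf fragment (illustrative values only)
    cache_swap 2000     # total disk store, in MB
    swap_low   80       # start purging when the store reaches 80% of cache_swap
    swap_high  95       # purge hard above 95%; a wide low/high gap means fewer,
                        # larger purge runs instead of constant churn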

Have you thought of the possibility of ganging another squid up in series? One
responsible for talking to remote sites, and the other responsible for communicating
with the users. The upstream one would only be a sort of store-and-forward box with
single_parent_bypass on the downstream box.

I _suspect_ that the arrangement might help, given that more CPU time would be
available for dealing with data, and less attention could be paid to the disks
themselves (especially if you tell it not to cache... or is that a dumb idea? It's
late. I'm not sure. Coffee might help).
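
For the downstream (user-facing) box, the relevant squid.conf lines would look
roughly like this. It's a sketch from memory -- the hostname and ports are
invented and I haven't actually tried it, so treat it as a starting point only:

    # downstream box: caches for the users, forwards every miss upstream
    cache_host squid-upper.example.net parent 3128 3130
    single_parent_bypass on    # only one parent, so skip the ICP query and
                               # hand misses straight to it

The upstream box would point at the net as usual; if the not-caching idea isn't
completely daft, you'd also give it next to no cache_swap so it stays a pure
store-and-forward relay.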

Has anyone tried splitting the network and cache loads across boxes like this?
Is the idea totally screwed, or is there actually some small grain of merit in it?

> This is with Arjan's unofficial patch to make more efficient use of the
> UN*X filesystem cache, running on Solaris 2.6.

Ye gods, it _is_ late. I nearly CC'ed the damn thing to my company development list.
Bleah. Coffee time. Who needs sleep when you've got instant?

D

> >
> > D
>
> Markus
>
> > > > horrible. Pages that can be fetched directly from the webserver
> > > > may take a minute or two from the cache. There is no paging going on, and
> > > > neither raw disk I/O nor network I/O seems to be a problem. Of my 16 dnsserver
> > > > processes the last few are rarely used. CPU load rarely goes above 0.5.
> > > >
> > > > The only oddity I find is this line in cache.log, appearing every few seconds
> > > > while the cache is busy:
> > > >
> > > > 98/01/22 23:09:52| diskHandleWrite: FD 23: disk write error: (32) Broken pipe
> > > >
> > > > Is this related? Any idea what might be the problem?
> > >
> > > We have the same problem here, running 1.1.20 on a HP/UX 10.20 machine.
> > > At times of high load even TCP_HITs take up to 50 seconds (at least the
> > > logfile says so) to be delivered to the clients. Request peak at our
> > > site is about 20 requests per second and usually the machine has a load
> > > of 0.7 or so. Maybe it is a general problem of squid with heavy load.
> > >
> > > Rainer
> > >
>
> ------------------------------------------------------------------------
>
> Markus Storm <Markus.Storm@mediaWays.net>
> mediaWays GmbH
>
> An der Autobahn, Postfach 185
> 33311 Gütersloh, Germany
> Work: ++49 +5241 80-7867   Fax: ++49 +5241 80-90561

--
Did you read the documentation AND the FAQ?
If not, I'll probably still answer your question, but my patience will
be limited, and you take the risk of sarcasm and ridicule.