RE: comm_select.c bug

From: Andres Kroonmaa <andre@dont-contact.us>
Date: Mon, 11 Sep 2000 12:42:48 +0200

On 11 Sep 2000, at 8:51, Chemolli Francesco (USI) <ChemolliF@GruppoCredit.it> wrote:

> > > NAME: incoming_dns_average
> > > +TYPE: int
> > > +DEFAULT: 4
> > > +LOC: Config.comm_incoming.dns_average
> > > +DOC_NONE
> > > +
> > > ...
> > > +NAME: min_dns_poll_cnt
> > > +TYPE: int
> > > +DEFAULT: 8
> > > +LOC: Config.comm_incoming.dns_min_poll
> > > +DOC_NONE
> > > +
> > >
> > > Ehm... what are those supposed to do? When do they become relevant?
> >
> > from this data cf_parser.c is generated, which accepts and
> > parses squid.conf. These lines do nothing else but allow
> > cf_parse.c to set initial defaults, which otherwise are left
> > at 0, which is not what we want.
> > (make clean ; make) is needed, not sure about configure.
>
> Sure, I understand how those lines are used. My question was:
> "what is a good value for those settings"? When do they become
> relevant?

 oh, sorry. I think any value above 0-1 is good enough.
 The algorithm used is commented in comm_select.c.

 The idea behind this algorithm is to delay polling of the incoming
 sockets, but only so far that by the time we next poll them, we
 receive about incoming_*_average new incoming events. If we get
 more new events than we expected, we decide that we are falling
 behind and that the input queues are growing too large, so the
 poll frequency for the incoming sockets is increased.
 *_average is the "normal" amount of incoming events per service
 run, a balance between too excessive polling (and thus wasted
 CPU) and too slow a reaction to incoming events.
 *_min_poll determines the maximum poll frequency by making sure
 that at least this many normal sockets are serviced before the
 incoming sockets are considered for polling again.

 Normally, the incoming_*_interval counters tune themselves to the
 maximum value, meaning that squid is handling requests fast enough.
 These tunables become relevant only when the cache seems overloaded
 and you need to give preference to specific incoming sockets, be
 it ICP, DNS, or new HTTP requests.

 It should make a difference whether you are using async-io or not:
 without async-io, disk events can take quite some time, and if you
 have to handle very many files before repolling the incoming
 sockets, you could lose ICP or DNS replies, or overflow the
 incoming HTTP TCP queues. Then you'd want to tune the repolling
 frequencies to be more aggressive.
 But if you use async-io, then most delay can come only from TCP
 socket servicing, and as that is pretty fast, you'd want the
 incoming sockets polled less frequently.
 This algorithm is there to allow squid to self-tune.

 Default values of 0 simply disable self-tuning and lock squid
 at the fastest polling rates possible.

 I think you should bother with these tunables only when you see
 real trouble. And then you are pretty much on your own: tuning
 them right is indeed quite a voodoo and mostly a matter of your
 own preference.
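 For the record, explicit tuning in squid.conf would look something
 like the following (the values here are just the defaults from the
 patch above, not recommendations):

```
# Expect about 4 new DNS events per service run; require at least
# 8 normal sockets serviced before the incoming DNS sockets are
# considered for polling again.
incoming_dns_average 4
min_dns_poll_cnt 8
```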

------------------------------------
 Andres Kroonmaa <andre@online.ee>
 Delfi Online
 Tel: 6501 731, Fax: 6501 708
 Pärnu mnt. 158, Tallinn,
 11317 Estonia
Received on Mon Sep 11 2000 - 04:45:25 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:12:37 MST