Re: File descriptor leaks; misc queries

From: WWW server manager <webadm@dont-contact.us>
Date: Thu, 3 Jul 1997 00:26:50 +0100 (BST)

[Thanks also to James R. Grinter and Don Lewis for their comments on these
questions - though so far, I don't see any comments or solutions for my
more significant problem, the "file descriptor leak" - any hints welcome!]

[Also, this is a couple of hundred lines - sorry, but the comments prompted
further thoughts on various issues!]

Duane Wessels wrote:
> webadm@info.cam.ac.uk writes:
>
> >(1) The cachemgr.cgi "Cache Information" page for 1.NOVM.11 reports
> >...
> >The penultimate line looks very suspicious - what does it really mean when
> >it says it has 1 file descriptor available?
>
> The '1' comes from 'fdstat_are_n_free()' which I recently changed.
> Now it returns 1 or 0 instead of the number of descriptors available.
> I didn't realize it was being used for output like that.
>
> Also I hadn't noticed the duplicated 'Number of file descriptors in
> use'.

I didn't notice the duplication either. Oh well! Thanks for explaining the
unexpected "1".

> >(2) Again in cachemgr.cgi, is it standard/unavoidable (at least on Solaris
> >2) that the "Resource usage section for squid" of the Cache Info page lists
> >all zeroes for the values, with zeroes also in cache.log when it writes
> >details as squid terminates? E.g.
> >...
> Might be related to this:
>
> #ifdef _SQUID_SOLARIS_
> /* Solaris 2.5 has getrusage() permission bug -- Arjan de Vet */
> enter_suid();
> #endif
> getrusage(RUSAGE_SELF, &rusage);
> #ifdef _SQUID_SOLARIS_
> leave_suid();
> #endif
>
> I'd suggest removing the enter/leave_suid calls and see if that
> works.

The situation actually appears to be the other way around, as implied by
James Grinter's comments. The Makefiles built by configure do not set
_SQUID_SOLARIS_ by default (maybe because I'm using Sun's cc rather than
gcc? though that sounds implausible). Rebuilding with it set explicitly
mostly fixes the problem -

Resource usage for squid:
        CPU Time: 116 seconds (32 user 84 sys)
        CPU Usage: 6%
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 30180

except the zero for Maximum Resident Size. But the Solaris 2.5.1 man
page for getrusage says in the NOTES section "Only the timeval fields of
struct rusage are supported in this implementation." It documents all the
fields and many seem to have plausible values, but apparently the ru_maxrss
field isn't set (or else getpagesize() returns zero :-). Better than before,
anyway!
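
[Presumably the displayed figure is derived along these lines - this is just
my guess at the arithmetic, not the actual Squid code - so with ru_maxrss
left at zero by the kernel, the conversion to KB prints zero whatever
getpagesize() returns:

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int
main(void)
{
    struct rusage ru;
    long rss_kb;

    if (getrusage(RUSAGE_SELF, &ru) < 0) {
        perror("getrusage");
        return 1;
    }
    /* assume ru_maxrss counts pages; convert pages to KB */
    rss_kb = ru.ru_maxrss * (getpagesize() >> 10);
    printf("Maximum Resident Size: %ld KB\n", rss_kb);
    return 0;
}
]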

[Related point: I edited src/Makefile.in to set _SQUID_SOLARIS_, then
ran configure and make. make just relinked squid, until I did make clean and
repeated the make. Is it practical for the generated Makefiles to get this
right, and recognise when reconfiguration implies everything (or just some
things, possibly) should be recompiled?]

> >(3) I suspect this one is some sort of DNS lookup oddity.
> >
> >(a) In the cachemgr.cgi "Cache Client List", it often lists IP addresses in
> >some of the Name: lines, even though the systems concerned have perfectly
> >good DNS registrations and the names should be available.
> >
> >(b) In access.log, I get a steady stream of UDP_DENIED entries for requests
> >from my one ICP neighbour; most requests are OK, but around 3% are logged
> >with the IP address in the access log instead of the hostname, and are
> >rejected instead of being processed normally. (ICP queries are only allowed
> >from that server, configured by hostname.)
> >
> >I *think* I saw indications that some parent caches (running Squid, details
> >unknown) similarly rejected a proportion of our ICP queries, presumably for
> >the same reason.
>
> As I'm sure you realize, performing DNS lookups takes time and would
> block the Squid process (hence the dnsservers). So the FQDN cache
> exists to cache these reverse lookups just like the IP cache exists
> to cache the forward lookups.

Fine so far...

> Because you've given a hostname in the access lists, Squid must check
> the FQDN cache for every ICP query received.
>
> But the FQDN cache entries can timeout, or get purged to make room
> for other entries. When an ICP query comes in, and there is no
> FQDN cache entry for the address, what should Squid do?

What it does now, unless that situation can be avoided (at least in more
cases than at present). Unfortunately. [The comments below emphasising the
importance of names are written in the knowledge that there's a conflict
between what is good for many purposes (names) and what is efficient in
practice (addresses). Maybe there's some scope for an improved compromise.]

> Waiting for the reverse lookup to happen is really out of the question.
> The ICP reply needs to be sent immediately. Any delay is unacceptable.
> Since we can't immediately determine the hostname from the address,
> we have to choose to either allow or deny the request. We opt to
> be paranoid and deny it.

As James Grinter mentioned, this sort of oddity can also happen when systems
have multiple interfaces and, for example, all replies come from one
interface even though requests went independently to both (replies from the
"wrong" interface for a request being ignored), or when the "outbound"
interface isn't in the DNS. I've seen this where a second interface was used
as a "placeholder" for another name (nominally a different system, just not
at that time...): requests were sent to "B", but, as seems common with
multiple interfaces, the reply came back via another interface - "A",
according to the reverse lookup - which is not what Squid expected. I think
the explanation in the present case, though, is almost certainly DNS lookups
taking longer than Squid is prepared to wait.
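
[To make sure I've understood the deny-on-miss behaviour, here is how I
imagine the check going - an illustrative sketch only; the function names
are stand-ins I've made up, not Squid's actual internals:

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Stand-in for a non-blocking probe of the FQDN cache: return the cached
 * name for this address, or NULL if no entry is currently present. */
static const char *
fqdn_cache_probe(struct in_addr addr)
{
    (void) addr;
    return NULL;                /* pretend the entry has expired */
}

/* Stand-in for matching a hostname against the configured ACLs. */
static int
acl_match_hostname(const char *fqdn)
{
    return strcmp(fqdn, "neighbour.example.ac.uk") == 0;
}

static int
icp_access_allowed(struct in_addr addr)
{
    const char *fqdn = fqdn_cache_probe(addr);

    if (fqdn == NULL) {
        /* No cached reverse mapping.  Waiting for a dnsserver reply
         * would delay the ICP response, so be paranoid and deny -
         * hence the sporadic UDP_DENIED entries. */
        return 0;
    }
    return acl_match_hostname(fqdn);
}

int
main(void)
{
    struct in_addr a;

    a.s_addr = inet_addr("131.111.8.1");
    printf("allowed: %d\n", icp_access_allowed(a));
    return 0;
}
]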

Deny is certainly the only safe option. However, as Don Lewis has suggested
in his reply, squid could take anticipatory action to refresh recently
referenced FQDN cache entries that are about to expire. That might not be
desirable in all cases (depending on what proportion of such cases would be
a waste of time, not referenced again before being flushed from the cache),
but how about a "keep current" flag for FQDNs that are referenced (by
exact name, not e.g. domain implying an arbitrary number of systems) in
configuration directives and hence are predictably required frequently?

They could be refreshed in advance, or alternatively refreshed when they
expire *but* be handled slightly specially at that point if the lookup takes
too long - using the old, cached value if a replacement isn't yet available.
Keeping the entries refreshed in anticipation of being needed seems like the
best strategy, though, with the "keep current" flag also implying "keep in
cache" so they wouldn't be displaced by other entries (which perhaps implies
the cache should be sized to allow for these fixed entries in addition to
the number of entries it would otherwise hold).
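
[A sketch of what I have in mind - entirely hypothetical, none of these
names exist in Squid: entries flagged "keep current" are pinned in the cache
and renewed a little before they expire, so a lookup never has to block on
them:

#include <stdio.h>
#include <string.h>
#include <time.h>

#define FQDN_TTL        3600    /* normal lifetime in seconds (assumed) */
#define REFRESH_MARGIN   300    /* renew this long before expiry */

struct fqdn_entry {
    char name[256];             /* hostname */
    time_t expires;             /* when the cached mapping lapses */
    int keep_current;           /* set for names used in config directives */
};

/* Stand-in for queueing an asynchronous reverse lookup via a dnsserver;
 * here it just pretends the lookup succeeded immediately. */
static void
schedule_refresh(struct fqdn_entry *e)
{
    printf("refreshing %s ahead of expiry\n", e->name);
    e->expires = time(NULL) + FQDN_TTL;
}

/* Called periodically (from the event loop, say): renew any pinned entry
 * that is close to expiring.  Unpinned entries are left to the normal
 * expiry and replacement policy. */
static void
fqdn_keepalive_sweep(struct fqdn_entry *table, int n)
{
    time_t now = time(NULL);
    int i;

    for (i = 0; i < n; i++) {
        if (table[i].keep_current && table[i].expires - now < REFRESH_MARGIN)
            schedule_refresh(&table[i]);
    }
}

int
main(void)
{
    struct fqdn_entry t[1];

    memset(t, 0, sizeof(t));
    strcpy(t[0].name, "neighbour.example.ac.uk");
    t[0].keep_current = 1;
    t[0].expires = time(NULL) + 60;     /* about to expire */
    fqdn_keepalive_sweep(t, 1);
    return 0;
}
]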

I don't like the idea of using addresses explicitly in the configuration
file, since it's very likely they will change over time (possibly
transiently, possibly long-term) and tracking such changes could waste time
and cause disruption to service (though possibly hidden disruption and
inefficiency, rather than anything immediately obvious - e.g. if neighbours
were refused access but coped by using other routes, unnecessarily).

I have to configure client access controls by domain name, anyway, since
there are several class B networks, plus a growing accumulation of class C
networks in special situations, with multiple independent allocation
authorities, and some systems on those networks in other domains that
shouldn't be allowed access; reality is not simple. The requirement is for
all systems in a particular domain to have access, regardless of address...

Also ... with the cachemgr.cgi Cache Client List, even displaying the list
frequently does not seem to show names for all the systems listed ... is
that a conscious choice, not to look up names for those addresses which
don't already have a name cached? I can see the argument on grounds of
efficiency, but I know systems by their names, not their addresses, and for
that display to be useful, I really need it to show names consistently.

That prompts some further thoughts, though ... Does the Cache Client page
show all clients that have accessed Squid since it was last restarted? If
so, with around 15K potential clients the list could grow to be unmanageably
long, and translating all the addresses (that weren't cached due to recent
accesses) could be *very* slow - clearly not good...

On the other hand, if Squid hadn't been restarted for a significant
period, the addresses alone could be misleading (if, for example, I noticed
that some system appeared to be having problems, or to be overloading the
cache - someone running a badly-behaved indexing robot, whatever).

The DNS mappings are dynamic, and you really need the pair (IP
address, hostname) stored at the time of the access, as the identifying
information for the counters (not the address alone). Otherwise, the address
could have been reallocated (so you'd be chasing a problem with the wrong
system) or the name and associated services could have moved to a different
system, leaving the address behind... The problem with actually doing that
is that it could mean the difference between using 60KB to store the
addresses (if 15K systems had accessed the server) and maybe a megabyte to
store the associated names. A report page covering 15K clients would be
unusable anyway...

Which prompts another thought - when (if ever) are the client and server
lists either emptied, or systems dropped from them if not referenced
recently? The log files are the obvious source of long-term usage
information, so perhaps Squid could keep a last-referenced timestamp for
each entry in the client and server lists and drop entries not seen for some
time (a day? a week? configurable?).

[Probably configured separately for clients and servers: I can imagine
wanting to drop clients not seen recently while keeping my handful of
neighbour and parent servers listed permanently.]

I suppose the counts from dropped entries could be added into a "catchall"
"Other clients" or "Other servers" entry, so that overall the pages would
still reflect totals since the server was started (or the counts were reset,
if that ever happens e.g. with logfile rotation...?), even though some
counts for the listed systems might have been dumped previously into the
Other totals along with systems that haven't been seen again recently.
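
[Again, purely to illustrate what I mean rather than to propose actual code
- nothing below is Squid's real client-list structure: each entry pairs the
address with the name as resolved at access time, plus a last-referenced
timestamp, and a periodic sweep folds idle entries into the catchall so the
totals still cover everything since startup:

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <netinet/in.h>

struct client_entry {
    struct in_addr addr;        /* client IP address */
    char name[64];              /* hostname as resolved at access time */
    time_t last_ref;            /* last time this client was seen */
    unsigned long http_requests;
    int in_use;
};

static struct client_entry other_clients;  /* the "Other clients" catchall */

/* Drop entries not referenced within max_idle seconds, folding their
 * counts into the catchall so the page totals still cover everything
 * since the server was started. */
static void
client_list_sweep(struct client_entry *table, int n, time_t max_idle)
{
    time_t now = time(NULL);
    int i;

    for (i = 0; i < n; i++) {
        if (!table[i].in_use)
            continue;
        if (now - table[i].last_ref > max_idle) {
            other_clients.http_requests += table[i].http_requests;
            table[i].in_use = 0;
        }
    }
}

int
main(void)
{
    struct client_entry t[1];

    memset(t, 0, sizeof(t));
    strcpy(t[0].name, "some-client.example.org");
    t[0].last_ref = time(NULL) - 8 * 24 * 3600;   /* idle for 8 days */
    t[0].http_requests = 42;
    t[0].in_use = 1;

    client_list_sweep(t, 1, 7 * 24 * 3600);       /* one-week idle limit */
    printf("Other clients: %lu requests\n", other_clients.http_requests);
    return 0;
}
]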

Finally, a different point entirely...

I tried applying the connection-retrying patch from
http://squid.nlanr.net/Squid/1.1/1.1.11/connect-retry.patch, which is
linked to from both the 1.1.11 and 1.NOVM.11 pages, but it wouldn't apply
cleanly to 1.NOVM.11 - lots of slight offsets (probably harmless), but
several outright failures for particular hunks of the patch. In spite of the
link from the 1.NOVM.11 page, it looks like the patch needs updating to work
with NOVM.

                                John Line

-- 
University of Cambridge WWW manager account (usually John Line)
Send general WWW-related enquiries to webmaster@ucs.cam.ac.uk