Re: Squid Startup / Config Problems

From: Brian Denehy <>
Date: Thu, 03 Apr 1997 23:42:16 +1000

| I have just installed a new Linux Box (2.0.29) and am having major problems
| getting squid to run properly on it. Squid Version 1.1.8 for i586-pc-linux-gn
| When starting up the server it never comes back from the prompt....

You are using RunCache? (Good idea.) Run directly, squid stays in the
foreground, like inetd -d and named -d and other well-behaved daemons
in debug mode. Since squid is still being actively developed, it
assumes you are thinking about debugging.
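For the record, the two start-up styles look like this (paths assume the
stock /usr/local/squid install; adjust for your own layout):

```shell
# Preferred: the RunCache wrapper backgrounds squid and restarts it if it dies.
/usr/local/squid/bin/RunCache &

# Bare squid stays attached to your terminal unless you background it yourself.
/usr/local/squid/bin/squid &
```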

| I have looked at the processes and the root daemon and 8 children processes
| are starting up.... then this little dubious one:
| /usr/local/squid/bin/ftpget -S 1638
| i have looked at a friends machine and this one doesn't start on his... but
| he has no idea of the problem.

He has the problem, not you. ftpget (and its children) do all the ftp
retrieval on squid's behalf.

| That aside the server DOES work.... and running it with squid & (or even
| RunCache) fixes the problem.....
| So does anyone KNOW why this is happening ????
| i also get a few 'Page fault - could not write to physical i/: XXX' in the
| logs when it goes - so how bad is this one ? I have tried it on two boxes so
| I don't think that this is the problem.

Can't help you with this; we like FreeBSD.

| Also, on caching, I have gone through some 150+ messages on the correct
| settings for squid and the connect caches.... does anyone have OPTIMAL
| settings for the following:
| Sydney based - 128K link
| 128M ram in machine (need about 32 for other stuff)
| 2G cache (pull about 100Mb new stuff a day)
| Want to be able to keep big files in there (say up to 16Mb)

Not a good idea. Probably even 4M is too big to keep, though my stats
may not look like yours. You don't have enough RAM to cope with the
peaks in VM usage. The statistics of web usage are not friendly towards
caching; almost everything you get is fetched once. Your cache wins
because a small number of objects are used a humungous lot (eg the
netscape now gif). Read about the Riemann zeta distribution, if you are
theoretically inclined. A better idea for big files is to investigate
the redirector and aggressively rewrite URLs pertaining to netscape4 and
msie3/4 so that they come out of a local copy. Not much else is worth
the effort of tracking in the last year or so. Usually you do this too
late, as your clients have found a couple of new mirrors which aren't dead
from overuse.
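If you want to try that, here is a rough sketch of a redirector (hooked in
with squid's redirect_program directive). The local mirror URL and the
download patterns below are made up for illustration only; squid hands the
redirector one request per line on stdin as "URL ip-address/fqdn ident
method" and reads the (possibly rewritten) URL back on stdout:

```python
import re
import sys

# Hypothetical local mirror of the big browser installers -- adjust to your site.
LOCAL_MIRROR = "http://www.example.com.au/mirror"

# Made-up patterns for the Netscape 4 and MSIE 3/4 download areas.
PATTERNS = [
    (re.compile(r"^http://ftp\d*\.netscape\.com/.*/(\S+\.exe)$"),
     LOCAL_MIRROR + "/netscape/"),
    (re.compile(r"^http://www\.microsoft\.com/.*/(msie\S*\.exe)$"),
     LOCAL_MIRROR + "/msie/"),
]

def rewrite(url):
    """Return a local-mirror URL for known big downloads, else the URL unchanged."""
    for pattern, prefix in PATTERNS:
        m = pattern.match(url)
        if m:
            return prefix + m.group(1)
    return url

if __name__ == "__main__":
    # squid sends: "URL ip-address/fqdn ident method", one request per line.
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        print(rewrite(fields[0]))
        sys.stdout.flush()
```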

Apart from that, once you tune your refresh lifetimes you should be
pretty happy. 100M/day is just about idling for squid. In fact, you
would probably find that things got worse if you upped any of your major
parameters, particularly bandwidth.
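By way of illustration only, a squid.conf fragment along those lines --
the numbers are guesses for your setup, not recommendations, so check the
comments in your own squid.conf for the exact units and defaults in your
version:

```
# Illustrative numbers only -- verify directive names and units against
# the comments shipped in your squid.conf.

cache_mem 32                  # MB of RAM for hot objects; leave headroom for VM peaks
cache_swap 2000               # MB of disk for the cache
maximum_object_size 4096      # KB; per the above, don't bother keeping 16MB files

# refresh_pattern regex min-age percent max-age (times in minutes)
refresh_pattern ^ftp:  1440  20%  10080
refresh_pattern .      0     20%  4320
```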
| That's about it.... I dont think its working properly yet - keep getting
| misses to connect :-(
| Thanx for any help... to keep noise down, please send replies to
| I'll send anyone my .conf file if anyone can help!
| Greatly Appreciated.
| Kindest Regards,
| Matt Robinson BCompSci
| Internet Mania
| P.S. All this ICP stuff is really scarey :-)

Brian Denehy,			   Internet:
IT Services	 	  	   MHSnet:
Australian Defence Force Academy   UUCP:!uunet!!!bvd
Northcott Dr. Campbell ACT Australia 2600  +61 6 268 8141  +61 6 268 8150 (Fax)
Received on Thu Apr 03 1997 - 05:52:12 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:34:56 MST