TCP conn./Page faults ratio (was: Re: troubles with Squid)

From: Tomasz Papszun <>
Date: Tue, 28 Apr 1998 17:35:08 +0200 (MET DST)

On Tue, 28 Apr 1998, Dirk Vleugels wrote:

> Solaris will do _all_ the disk IO via pagein/pageout. So these numbers
> are by no means a sign that your squid binary gets paged to disk very
> frequently.
> >
> > Can anybody confirm such ratio is normal for Solaris?
> Yup.
> Some hints for Solaris & Squid:
> (very worth reading!!)
> Check out the ndd values for /dev/tcp (ndd /dev/tcp \?).
> I tuned the tcp_conn_req_* queue values and the tcp_close_wait_interval
> (to get rid of the huge number of sockets in FIN_WAIT2 state).
> Also check /etc/system. ncsize & set ufs_ninode could help a lot on
> non-striped disks.
> Solaris 2.5.1 with all suggested patches works like a charm.
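For anyone else following along, here is roughly what checking and setting
those values looks like on Solaris 2.5.1. The specific numbers below are
only illustrative guesses, *not* recommended values - as I say further
down, I don't know what the values should be:

```shell
# List all tunable TCP parameters (as Dirk suggests)
ndd /dev/tcp \?

# Inspect the current values before changing anything
ndd -get /dev/tcp tcp_conn_req_max
ndd -get /dev/tcp tcp_close_wait_interval

# Example settings (illustrative only):
# shorten the close-wait interval (milliseconds) to drain
# sockets stuck in FIN_WAIT_2 faster
ndd -set /dev/tcp tcp_close_wait_interval 60000
# allow a longer pending-connection queue
ndd -set /dev/tcp tcp_conn_req_max 1024
```

And the /etc/system side (takes effect after a reboot; again, the
numbers are just placeholders):

```
* /etc/system - illustrative values only
set ncsize=8192
set ufs_ninode=8192
```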

Thanks to all who replied to my question, which was:
"Can anybody confirm such ratio is normal for Solaris?".
All these nice people Cc:'ed to the list, so you know their names :-) .

Although I'm a little worried because the majority of replies stated that
this ratio should *not* be so high even for Solaris, I tend to rely on
Dirk Vleugels' message (because it's comfortable for me to think that
it's *not* my machine's fault ;-) ).

Provided there's no obvious misconfiguration on my box, this must be
the way Solaris is.
The situation 8 days after a reload is (excerpts from cachemgr output):

Squid Object Cache: Version 1.1.21
        Number of TCP connections: 43102
        Number of UDP connections: 150832
        Connections per hour: 1005.2
        Storage Swap size: 917 MB
        Storage Mem size: 13799 KB
        CPU Time: 2054 seconds (1091 user 963 sys)
        CPU Usage: 0%
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 122941

So, the number of page faults is almost 3 times the number of TCP
connections! As you see, Squid is rather *not* overloaded (1000 conn/h).
The machine itself is quite idle. No users on it.
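Just to spell the arithmetic out, the "almost 3 times" comes straight
from the two cachemgr counters quoted above:

```shell
# Page faults vs. TCP connections, from the cachemgr output above
faults=122941
conns=43102
awk -v f="$faults" -v c="$conns" 'BEGIN { printf "%.2f\n", f / c }'
# prints 2.85 - almost 3 page faults per TCP connection
```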

/# w | head -1
  4:51pm up 8 day(s), 1:32, 2 users, load average: 0.01, 0.02, 0.04

/# vmstat -s|tail -4
   975690 user cpu
   805546 system cpu
 66974419 idle cpu <<<<<<<<
   915468 wait cpu

/# swap -l
swapfile dev swaplo blocks free
/dev/dsk/c0t3d0s1 32,25 8 263080 120160

/# swap -s
total: 76608k bytes allocated + 9656k reserved = 86264k used, 90796k

64 MB RAM;
SunOS name 5.5.1 Generic_103640-12 sun4m sparc SUNW,SPARCstation-20

cache_mem 8
cache_swap 1000
maximum_object_size 20480

Yes, I know that maximum_object_size is bigger than cache_mem but I do
want to store big objects, and I get "WARNING: Exceeded 'cache_mem' size"
only a few times a day. But maybe this is the cause of many page faults?
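If I read the Squid 1.1 config units right (cache_mem is in MB,
maximum_object_size in KB - please correct me if not), the excerpt
above means:

```
# squid.conf excerpt with units spelled out (my reading of the 1.1 docs)
cache_mem 8                  # 8 MB of memory for in-transit/hot objects
cache_swap 1000              # 1000 MB (~1 GB) of disk cache
maximum_object_size 20480    # 20480 KB = 20 MB - larger than cache_mem
```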

I've been trying to download the hints page mentioned above (for about
an hour now) but the transfer is extremely slow and the connection
breaks again and again - probably all "list-eners" are downloading it
right now ;-) .
But even if I get the whole page, it won't be easy for me to tune the
system, I'm afraid - I've got too little Solaris experience. Dirk's tips
say _what_ to check but one must know what the values _should_ be! :-) .

Thanks again and sorry for the long message.

-- Tomasz Papszun, Lodz, Poland
Received on Tue Apr 28 1998 - 08:44:50 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:39:58 MST