Re: [squid-users] squid Process Size

From: khiz code <khizcode@dont-contact.us>
Date: Mon, 10 Sep 2001 23:16:54 -0700 (PDT)

hi
yes, I've been following Joe's golden rules.
For about 12 GB of cache_dir I have given 100 MB of cache_mem, which is about
20 MB short of 10*12 = 120 MB.
I was afraid that giving 120 MB out of my 512 MB of RAM might lead to swapping.
Right now the cache_dir is about 26% full, so quite a long way to go.
Squid is already about 180 MB.
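
(A rough sketch of how the numbers work out under the rule of thumb Mike
quotes below, i.e. about 10 MB of index RAM per GB of disk cache plus
cache_mem; the last line is only an approximation based on the 26% figure:)

        index RAM for 12 GB of cache_dir:  12 x ~10 MB = ~120 MB
        plus cache_mem:                    100 MB
        expected process size when full:   ~220 MB, plus malloc overhead
        very roughly at ~26% full:         0.26 x 120 + 100 = ~131 MB
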
Cache information for squid:
        Request Hit Ratios: 5min: 24.2%, 60min: 28.5%
        Byte Hit Ratios: 5min: 24.4%, 60min: 24.1%
        Request Memory Hit Ratios: 5min: 17.2%, 60min: 16.6%
        Request Disk Hit Ratios: 5min: 40.5%, 60min: 43.3%
        Storage Swap size: 2877784 KB
        Storage Mem size: 102352 KB
        Mean Object Size: 11.66 KB
        Requests given to unlinkd: 0
Median Service Times (seconds) 5 min 60 min:
        HTTP Requests (All): 0.80651 0.49576
        Cache Misses: 0.94847 0.94847
        Cache Hits: 0.01955 0.01745
        Near Hits: 0.94847 0.89858
        Not-Modified Replies: 0.01035 0.00919
        DNS Lookups: 0.04639 0.04433
        ICP Queries: 0.00000 0.00000
Resource usage for squid:
        UP Time: 164894.814 seconds
        CPU Time: 7193.890 seconds
        CPU Usage: 4.36%
        CPU Usage, 5 minute avg: 13.19%
        CPU Usage, 60 minute avg: 12.26%
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 1752
Memory usage for squid via mallinfo():
        Total space in arena: 178663 KB
        Ordinary blocks: 178636 KB 243 blks
        Small blocks: 0 KB 0 blks
        Holding blocks: 3912 KB 4 blks
        Free Small blocks: 0 KB
        Free Ordinary blocks: 26 KB
        Total in use: 182548 KB 102%
        Total free: 26 KB 0%
Memory accounted for:
        Total accounted: 154700 KB
        memPoolAlloc calls: 119585429
        memPoolFree calls: 118367031
Internal Data Structures:
        251563 StoreEntries
         18077 StoreEntries with MemObjects
         13948 Hot Object Cache Items
        246824 on-disk objects

 vmstat output:
 procs                 memory                 swap      io      system     cpu
 r  b  w   swpd    free    buff   cache  si  so  bi  bo   in   cs  us sy id
 0  0  0      0    2192  178724   63320   0   0   0   3   61   48   1  1 35
 0  0  0      0    2144  178764   63328   0   0   0   0  350  292   1  5 94
 0  0  0      0    2064  178848   63328   0   0   0  42  416  357   4  6 90

Are these healthy statistics? What about my response times?
I hope this box will go up to 100 req/sec plus in the next few days... the
support from the list has just been too good!
Right now I have not yet tuned the squid.conf refresh patterns; I'm using the
defaults.
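
(For reference, the stock refresh_pattern defaults in a 2.x squid.conf look
roughly like the lines below; worth double-checking against your own
squid.conf.default:)

        refresh_pattern ^ftp:           1440    20%     10080
        refresh_pattern ^gopher:        1440    0%      1440
        refresh_pattern .               0       20%     4320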
This box is connected via a 2 MB pipe to my ISP.
Please do get back.
Thanks a lot,
khizcode

--- Mike Diggins <diggins@mcmail.cis.mcmaster.ca> wrote:
>
> I believe the general rule is 10 MB of RAM per 1 GB of disk cache
> plus
> your cache_mem setting (approx). In your case, squid should reach a
> maximum size of about 220 MB assuming a full cache. How full is your
> cache?
>
> -Mike
>
> On Mon, 10 Sep 2001, khiz code wrote:
>
> > hi all
> > me again.. sorry
> > I've been running squid for the last 3 days
> > http req/sec = 20
> >
> > the squid process is about 180 MB
> > free -m shows me:
> >                       total    used    free  shared buffers  cached
> > Mem:                    505     502       2      11     168      66
> > -/+ buffers/cache:      267     237
> > Swap:                  4000       0    4000
> >
> > I've got a cache_dir of 12 GB distributed across 4 drives,
> > as 3 GB each
> > cache_mem 100 MB
> > Compaq ProLinea, 512 MB RAM
> >
> > is this process size okay?
> > the shared/buffers column keeps on increasing
> > should I expect swapping in the next few days???
> > any preventive measures?
> > because going by past posts on the list, a cache_mem of
> > 100 MB is about okay for 512 MB of RAM..
> > I've only got squid as the main process on the machine
> > please do reply back
> > regards
> > khizcode
>
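
(For reference, the setup described in the quoted mail would look roughly
like the squid.conf lines below, assuming a Squid 2.3+ style cache_dir line;
the mount points and the 16/256 L1/L2 values are just placeholders, not taken
from the original post:)

        cache_mem 100 MB
        cache_dir ufs /cache1 3000 16 256
        cache_dir ufs /cache2 3000 16 256
        cache_dir ufs /cache3 3000 16 256
        cache_dir ufs /cache4 3000 16 256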
