Re: [squid-users] Mem Cache flush

From: pokeman <asifbakali@dont-contact.us>
Date: Wed, 13 Feb 2008 11:50:13 -0800 (PST)

Today I'm seeing a lot of cache hit results. I set up my cache drives at
7000 MB each, and now the limit is full. What happens when another object
needs to be cached? How does Squid expire old and unused objects? I haven't
set any low/high water marks (I already posted my conf; I'm also using ZPH).
A sketch of the water-mark settings I have in mind follows the stats below.
Here is my squid HIT info:

Squid Object Cache: Version 2.6.STABLE18
Start Time: Tue, 12 Feb 2008 16:56:17 GMT
Current Time: Wed, 13 Feb 2008 19:46:18 GMT
Connection information for squid:
        Number of clients accessing cache: 0
        Number of HTTP requests received: 10290978
        Number of ICP messages received: 0
        Number of ICP messages sent: 0
        Number of queued ICP replies: 0
        Request failure ratio: 0.00
        Average HTTP requests per minute since start: 6391.9
        Average ICP messages per minute since start: 0.0
        Select loop called: 82844943 times, 1.166 ms avg
Cache information for squid:
        Request Hit Ratios: 5min: 44.6%, 60min: 45.0%
        Byte Hit Ratios: 5min: 19.8%, 60min: 23.5%
        Request Memory Hit Ratios: 5min: 32.9%, 60min: 31.9%
        Request Disk Hit Ratios: 5min: 31.6%, 60min: 34.3%
        Storage Swap size: 45131052 KB
        Storage Mem size: 524224 KB
        Mean Object Size: 21.58 KB
        Requests given to unlinkd: 339
Median Service Times (seconds) 5 min 60 min:
        HTTP Requests (All): 0.24524 0.25890
        Cache Misses: 0.44492 0.46965
        Cache Hits: 0.00286 0.00379
        Near Hits: 0.24524 0.25890
        Not-Modified Replies: 0.00091 0.00091
        DNS Lookups: 0.00464 0.00372
        ICP Queries: 0.00000 0.00000
Resource usage for squid:
        UP Time: 96600.326 seconds
        CPU Time: 21118.258 seconds
        CPU Usage: 21.86%
        CPU Usage, 5 minute avg: 35.46%
        CPU Usage, 60 minute avg: 37.29%
        Process Data Segment Size via sbrk(): 914716 KB
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
        Total space in arena: 914716 KB
        Ordinary blocks: 900701 KB 856201 blks
        Small blocks: 0 KB 0 blks
        Holding blocks: 20680 KB 13 blks
        Free Small blocks: 0 KB
        Free Ordinary blocks: 14014 KB
        Total in use: 921381 KB 99%
        Total free: 14014 KB 1%
        Total size: 935396 KB
Memory accounted for:
        Total accounted: 763842 KB
        memPoolAlloc calls: 1198206486
        memPoolFree calls: 1191587209
File descriptor usage for squid:
        Maximum number of file descriptors: 32768
        Largest file desc currently in use: 1554
        Number of file desc currently in use: 1342
        Files queued for open: 0
        Available number of file descriptors: 31426
        Reserved number of file descriptors: 100
        Store Disk files open: 0
        IO loop method: epoll
Internal Data Structures:
        2116617 StoreEntries
        111018 StoreEntries with MemObjects
        110528 Hot Object Cache Items
        2091341 on-disk objects
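
The water marks I mentioned above; a sketch only, based on the documented
defaults for these directives (untested on my box):

cache_swap_low 90
cache_swap_high 95

As I understand it, once disk usage passes the high water mark Squid starts
evicting objects (chosen by my heap LFUDA cache_replacement_policy) until
usage falls back below the low mark.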

Adrian Chadd wrote:
>
> On Wed, Feb 13, 2008, pokeman wrote:
>>
>> Thanks, I just switched my cache drives to aufs. Can you explain to me in
>> detail what other changes I should make in my squid.conf for better cache
>> results? We have almost a 45 Mb link, with 30 Mb for proxy services. Can I
>> add more hard drives for caching, or just tune Squid and the Linux kernel?
>> Remember, we are using RHEL ES 4. I know BSD gives high availability, but
>> we can't use it.
>
> You can just convert diskd to aufs, yes, as long as it's compiled in.
> It just requires a restart to be safe.
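>
> For example (a sketch using the paths and sizes from your posted conf;
> only the scheme name changes, shown here for /cache1):
>
>   cache_dir aufs /cache1 7000 16 256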
>
> You then need to grab a log-analysis tool from the internet and see
> what content is and isn't being cached. Then you can decide what to
> try to cache. :)
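>
> (One catch: your posted conf has "cache_access_log none", so an analysis
> tool would have nothing to read. You'd need to re-enable logging first,
> for example:
>
>   cache_access_log /var/log/squid/access.log
>
> where the path is just an example.)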
>
>
>
> Adrian
>
>>
>> Adrian Chadd wrote:
>> >
>> > G'day,
>> >
>> > A few notes.
>> >
>> > * Diskd isn't stable, and won't be until I commit my next set of
>> >   patches to 2.7 and 3.0; use aufs for now.
>> >
>> > * Caching Windows updates will be possible in Squid-2.7. It'll require
>> >   some rules and a custom rewrite helper.
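>> >
>> > A rough sketch of the shape it will take (the directive names here are
>> > from the 2.7 work in progress and may change before release, and the
>> > helper path is just an example):
>> >
>> >   acl wu dstdomain .windowsupdate.com
>> >   storeurl_access allow wu
>> >   storeurl_rewrite_program /usr/local/bin/store_url_rewrite
>> >
>> > The helper reads URLs on stdin and writes back a canonical "store URL",
>> > so the many per-mirror download URLs all map to one cache object.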
>> >
>> > * 3.0 isn't yet as fast as 2.6 or 2.7.
>> >
>> >
>> > Adrian
>> >
>> > On Tue, Feb 12, 2008, pokeman wrote:
>> >>
>> >> Well, my experience with the Squid cache under heavy load has not been
>> >> good. I have a 4-core machine with 7 SCSI drives and 4 GB of RAM; the
>> >> average workload in peak hours is 3000 users on 30 Mb of bandwidth, on
>> >> RHEL ES 4. I have searched many articles on cache performance,
>> >> especially for Windows updates; saving files with the PSF extension is
>> >> a real headache these days. I heard Squid 3.0 will perform better, but
>> >> why couldn't the Squid developers find a solution for caching Windows
>> >> updates in 2.6? Please tell me if I am doing something wrong in my
>> >> squid.conf.
>> >>
>> >>
>> >> http_port 3128 transparent
>> >> range_offset_limit 0 KB
>> >> cache_mem 512 MB
>> >> pipeline_prefetch on
>> >> shutdown_lifetime 2 seconds
>> >> coredump_dir /var/log/squid
>> >> ignore_unknown_nameservers on
>> >> acl all src 0.0.0.0/0.0.0.0
>> >> acl ourusers src 192.168.100.0/24
>> >> hierarchy_stoplist cgi-bin ?
>> >> maximum_object_size 16 MB
>> >> minimum_object_size 0 KB
>> >> maximum_object_size_in_memory 64 KB
>> >> cache_replacement_policy heap LFUDA
>> >> memory_replacement_policy heap GDSF
>> >> cache_dir diskd /cache1 7000 16 256
>> >> cache_dir diskd /cache2 7000 16 256
>> >> cache_dir diskd /cache3 7000 16 256
>> >> cache_dir diskd /cache4 7000 16 256
>> >> cache_dir diskd /cache5 7000 16 256
>> >> cache_dir diskd /cache6 7000 16 256
>> >> cache_dir diskd /cache7 7000 16 256
>> >> cache_access_log none
>> >> cache_log /var/log/squid/cache.log
>> >> cache_store_log none
>> >> dns_nameservers 127.0.0.1
>> >> refresh_pattern windowsupdate.com/.*\.(cab|exe|dll) 43200 100% 43200
>> >> refresh_pattern download.microsoft.com/.*\.(cab|exe|dll) 43200 100% 43200
>> >> refresh_pattern au.download.windowsupdate.com/.*\.(cab|exe|psf) 43200 100% 43200
>> >> refresh_pattern ^ftp: 1440 20% 10080
>> >> refresh_pattern ^gopher: 1440 0% 1440
>> >> refresh_pattern cgi-bin 0 0% 0
>> >> refresh_pattern \? 0 0% 4320
>> >> refresh_pattern . 0 20% 4320
>> >> negative_ttl 1 minutes
>> >> positive_dns_ttl 24 hours
>> >> negative_dns_ttl 1 minutes
>> >> acl manager proto cache_object
>> >> acl localhost src 127.0.0.1/255.255.255.255
>> >> acl to_localhost dst 127.0.0.0/8
>> >> acl SSL_ports port 443 563
>> >> acl Safe_ports port 1195 1107 1174 1212 1000
>> >> acl Safe_ports port 80 # http
>> >> acl Safe_ports port 82 # http
>> >> acl Safe_ports port 81 # http
>> >> acl Safe_ports port 21 # ftp
>> >> acl Safe_ports port 443 563 # https, snews
>> >> acl Safe_ports port 70 # gopher
>> >> acl Safe_ports port 210 # wais
>> >> acl Safe_ports port 1025-65535 # unregistered ports
>> >> acl Safe_ports port 280 # http-mgmt
>> >> acl Safe_ports port 488 # gss-http
>> >> acl Safe_ports port 591 # filemaker
>> >> acl Safe_ports port 777 # multiling http
>> >> acl CONNECT method CONNECT
>> >> http_access allow manager localhost
>> >> http_access deny manager
>> >> http_access deny !Safe_ports
>> >> http_access deny CONNECT !SSL_ports
>> >> http_access allow ourusers
>> >> http_access deny all
>> >> http_reply_access allow all
>> >> cache allow all
>> >> icp_access allow ourusers
>> >> icp_access deny all
>> >> cache_mgr info@fariya.com
>> >> visible_hostname CE-Fariya
>> >> dns_testnames localhost
>> >> reload_into_ims on
>> >> quick_abort_min 0 KB
>> >> quick_abort_max 0 KB
>> >> log_fqdn off
>> >> half_closed_clients off
>> >> client_db off
>> >> ipcache_size 16384
>> >> ipcache_low 90
>> >> ipcache_high 95
>> >> fqdncache_size 8129
>> >> log_icp_queries off
>> >> strip_query_terms off
>> >> store_dir_select_algorithm round-robin
>> >> client_persistent_connections off
>> >> server_persistent_connections on
>> >> persistent_request_timeout 1 minute
>> >> client_lifetime 60 minutes
>> >> pconn_timeout 10 seconds
>> >>
>> >>
>> >>
>> >> Adrian Chadd wrote:
>> >> >
>> >> > On Thu, Jan 31, 2008, Chris Woodfield wrote:
>> >> >> Interesting. What sort of size threshold do you see where performance
>> >> >> begins to drop off? Is it just a matter of larger objects reducing the
>> >> >> hit rate (due to fewer objects being cacheable in memory), or a
>> >> >> bottleneck in Squid itself that causes issues?
>> >> >
>> >> > It's a bottleneck in the Squid code which makes accessing the nth 4 KB
>> >> > chunk of an in-memory object take O(n) time.
>> >> >
>> >> > It's one of the things I'd like to fix after Squid-2.7 is released.
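>> >> >
>> >> > Illustration only, not the actual Squid code: a singly linked list of
>> >> > fixed-size chunks makes random access linear, so streaming a whole
>> >> > object out chunk by chunk ends up quadratic.
>> >> >
>> >> >   #include <stddef.h>
>> >> >
>> >> >   struct mem_node {
>> >> >       char data[4096];          /* one 4 KB chunk of object data */
>> >> >       struct mem_node *next;    /* next chunk in the list */
>> >> >   };
>> >> >
>> >> >   /* Reaching chunk n costs n pointer hops: O(n) per access. */
>> >> >   static struct mem_node *nth_chunk(struct mem_node *head, size_t n)
>> >> >   {
>> >> >       while (n-- > 0 && head != NULL)
>> >> >           head = head->next;
>> >> >       return head;
>> >> >   }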
>> >> >
>> >> >
>> >> >
>> >> > Adrian
>> >> >
>> >> >
>> >> >
>> >>
>> >
>> > --
>> > - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
>> > - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
>> >
>>
>
> --
> - Xenion - http://www.xenion.com.au/ - VPS Hosting - Commercial Squid Support -
> - $25/pm entry-level VPSes w/ capped bandwidth charges available in WA -
>
