Re: [squid-users] Overflowing filesystems

From: Michael Puckett <Michael.Puckett@dont-contact.us>
Date: Wed, 23 Nov 2005 19:26:04 -0800

Sorry if you see this again; I got a bounced mail from squid-cache.org.

Chris Robertson wrote:

>>>>-----Original Message-----
>>>>From: Michael Puckett [mailto:Michael.Puckett@Sun.COM]
>>>>Sent: Wednesday, November 23, 2005 9:25 AM
>>>>To: squid-users
>>>>Subject: [squid-users] Overflowing filesystems
>>>>
>>>>
>>>>I am running this version of squid:
>>>>
>>>>Squid Cache: Version 2.5.STABLE10
>>>>configure options: --enable-large-cache-files --disable-internal-dns
>>>>--prefix=/opt/squid --enable-async-io --with-pthreads --with-aio
>>>>--enable-icmp --enable-snmp
>>
>>I imagine you have some reason for disabling the internal DNS resolution. I'm a bit curious as to what it would be...
That is the way our admin set it up. This particular application is a
caching system internal to the company, in which (relatively) few users
move (relatively) few VERY large, multi-GB files from (relatively) few
origins to (relatively) few destinations. We are not caching web pages.
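
For context, the squid.conf knobs that matter most for a large-object
cache like this one, beyond cache_dir itself, are the object-size
limits. A minimal sketch (illustrative values, not our exact
configuration):

# Allow multi-GB objects into the cache; Squid 2.5 needs the
# --enable-large-cache-files build for cache files larger than 2 GB.
# The 8192 MB ceiling here is an example value only.
maximum_object_size 8192 MB
# Plain LRU replacement, Squid's default eviction policy.
cache_replacement_policy lru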

>>>>specifically enabled for large files. My cache_dir partition is 535GB
>>>>and the cache_dir directive looks like this:
>>>>
>>>>cache_dir aufs /export/vol01/cache 400000 64 64
>>>>cache_swap_low 97
>>>>cache_swap_high 99
>>
>>Aside from the unusually low number of directories for the amount of data, that all seems fine.
>>
>>>>Squid has consumed the entire partition:
>>>>
>>>>/dev/dsk/c1t1d0s7 537G 529G 2.2G 100% /export/vol01
>>>>
>>>>Not the 400GB expected from the cache_dir directive, and it is now
>>>>giving write failures.
>>>>
>>>>Have I set something up wrong? Why has the cache_dir size directive
>>>>been ignored and why isn't old cached content being released?
>>
>>Is Squid the only thing writing to this cache_dir? Is there only one
>>instance of Squid running? Do you see a process like unlinkd running?
>>Are there any errors in the cache_log? What OS are you running?
>>Assuming (judging from your email address) it's Solaris, have you had
>>a gander at the FAQ (http://www.squid-cache.org/Doc/FAQ/FAQ-14.html#ss14.1)?
Good call on the OS :) Yes, we are running a multiprocessor Solaris 10
system. There are no errors in the cache log other than the filesystem
write failures as the filesystem fills up. The server is entirely
dedicated to Squid as a cache server, and the filesystem is entirely
dedicated to the cache.
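
For what it's worth, the rough arithmetic I expected from the quoted
directives (a sketch, taking cache_dir's size field as megabytes):

  cache_dir ... 400000   -> at most ~400000 MB, about 390 GB of objects
  cache_swap_high 99     -> aggressive eviction above ~99% of that,
                            roughly 386 GB
  df                     -> 529 GB used of 537 GB

So actual usage is well over 100 GB past the configured high-water
mark, which is more than swap.state and filesystem overhead would
normally account for.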

PS output shows:
 0 S squid 20127 20121 0 40 20 ? 153 ? Jul 15 ? 0:00 (unlinkd)

with no CPU time accumulated thus far. Yes, we have had a gander at the
FAQ and have been running Squid internally for a number of years now.
However, this is the first time we have filled up so large a filesystem
while running the large-file Squid build.
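
One way to cross-check whether Squid's own accounting agrees with the
filesystem is to compare du against the cache manager's store reports.
A sketch, assuming squidclient was installed along with the /opt/squid
build (add -p <port> if the proxy is not listening on 3128):

  # What the filesystem sees under the cache_dir
  du -sk /export/vol01/cache
  # Per-cache_dir usage as Squid itself reports it
  squidclient mgr:storedir
  # Overall "Storage Swap size" from the general cache manager report
  squidclient mgr:info | grep -i swap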

-mikep

Received on Wed Nov 23 2005 - 20:26:07 MST
