fragmentation on cache disk

From: Umit Dericioglu <umit@dont-contact.us>
Date: Tue, 23 May 2000 11:44:49 -0300 (GMT)

Hi,

We are running squid-2.3.STABLE2 under Solaris 7 (64-bit)
on a 248 MHz SPARC with 128 MB RAM and two 3 GB cache disks.

Everything was fine until a couple of days ago, when
response time became extremely slow. "top" showed that
Squid was using nearly 90 MB of RAM, iowait was close
to 90%, and the ratio of page faults with physical I/O
to HTTP requests was more than four.
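
For the record, here is roughly how I was watching it, using
the standard Solaris tools plus the cache-manager "client"
binary that ships with Squid 2.3 (localhost:3128 assumed):

    vmstat 5       # page-in/out and "sr" columns show the paging
    iostat -x 5    # high %b and svc_t lined up with the iowait
    client mgr:info | grep -i 'page fault'
                   # reports "Page faults with physical i/o"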

So I thought Squid was doing too much paging, as explained
in the FAQ. I reduced cache_mem in squid.conf and
recompiled Squid with dlmalloc.
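
Concretely, the two changes were along these lines (the
dlmalloc switch is the stock ./configure option; the rest
of my build flags are omitted here):

    # in squid.conf
    cache_mem 8 MB

    # rebuild linked against the bundled dlmalloc
    ./configure --enable-dlmalloc
    make && make install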

A few hours later Squid died, complaining that it couldn't
write to cache1. Yet df -k showed cache1 was only 71% full,
so I thought the cache1 partition might have run out of
inodes.
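
On Solaris that theory can be checked directly, assuming
/cache1 is a plain ufs mount:

    df -o i /cache1   # used/free inode counts (ufs only)
    df -e /cache1     # just the number of files (inodes) free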

Then I ran fsck on cache1; it reported 24% fragmentation.
I did rm -r * on cache1 and watched it with df -k in
another shell. After 20 minutes it had only gone down to
64% full, despite my attempt to delete everything on it
with rm -r *.
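
For reference, that was essentially:

    cd /cache1 && rm -r *                          # one shell
    while true; do df -k /cache1; sleep 60; done   # the other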

Then I did an " echo " " > /cache1 " and started
Squid. Now everything is back to normal again. However,
I am worried that, in time, Squid will end up with that
terrible iowait and fragmentation again.

My question is: is this the way Squid works, i.e. is
fragmentation inevitable, or am I missing something
pretty obvious here?

Some of my squid.conf settings:

cache_mem 8 MB
cache_swap_low (default)
cache_swap_high (default)
maximum_object_size 4 MB
cache_dir ufs /cache1 3000 16 256
cache_dir ufs /cache2 3000 16 256
replacement_policy GDSF
quick_abort_min 1 KB
quick_abort_max 1 KB
quick_abort_pct 95
memory_pools off
store_avg_object_size 13 KB
store_objects_per_bucket 50
never_direct allow all
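
(For the two directives left at their defaults, my reading
of the Squid 2.3 squid.conf is:

    cache_swap_low 90
    cache_swap_high 95

i.e. object replacement begins at 90% of the configured
cache_dir size and becomes more aggressive toward 95%.)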

I'd appreciate any help in preventing this problem
from occurring again.

Umit