Re: [squid-users] NOTICE: realloccg /cache1: file system full, again, Solaris 2.6

From: Don Lewis <Don.Lewis@dont-contact.us>
Date: Mon, 12 Oct 1998 22:33:09 -0700

On Oct 12, 6:00pm, Chris Tilbury wrote:
} Subject: Re: [squid-users] NOTICE: realloccg /cache1: file system full, ag
} On Mon, Oct 12, 1998 at 05:38:11PM +0200, Javier Puche. CSIC RedIRIS wrote:
}
} > I've been having the same problem reported by Chris Tilbury some time
} > ago, the message at the console:
} >
} > NOTICE: realloccg /cache1: file system full
} >
} > but filesystems have plenty of space.
} >
} > Anyhow the filesystems were already newfs'ed with: (Solaris 2.6, squid
} > 2-RELEASE)
} >
} > newfs -m 1 -c 16 -o time -r 7200 -C 8
} >
} > I have done it now with:
} >
} > newfs -m 1 -c 8 -i 1024 -o time -r 7200 -C 8
} >
} > but I am not really convinced that the problem is the number of
} > cylinder groups, as Chris noted.
}
} My call with the answercentre has just been closed, and I have some
} answers. The problem is caused by fragmentation. UFS is simply not well
} equipped to handle the kind of filesystem workload (continuous
} creation/deletion of many small files) that an application like squid
} generates.
}
} I have a couple of potential solutions and a "tip", none of which I have
} tried yet, so if anyone decides to test them first, then on your own
} heads be it. :-)
}
} First, the tip: set minfree to 1%. Because of the nature of the access
} to these filesystems, I've been told that you won't see any performance
} drop under 2.6 if you reduce it this far. This will free up a little more
} space, although nothing on the order of magnitude of what is being lost.

I'd recommend against this. I've run into problems like this on a news
spool when the absolute amount of free space (including minfree) dropped
below 3%. The problem is that while df reports free space, all of that
free space consists of fragments. Unfortunately, only the last piece of a
file can be stored in fragments, which must be contiguous, so as soon as
you try to store a file that needs one or more full blocks, you get an
out-of-space error. If you run out of blocks, decreasing minfree won't
help, since the total free space tied up in frags may well be greater
than minfree.
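
(For the record, the knob being discussed here is tunefs's -m flag. If
you do want to try it anyway, the change would look something like the
line below -- the device name is only a placeholder, and ideally the
filesystem should be unmounted when you run it:

  tunefs -m 1 /dev/rdsk/c0t0d0s5

The same setting can be chosen at filesystem creation time with newfs -m,
as in the newfs commands quoted above.)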

I'm also suspicious of the claim that you won't run into performance
problems. The free blocks will be widely scattered across the disk, so
any file created while space is tight will have its contents widely
scattered as well. When you try to read that file, the disk head will
have to seek all over the place to reach the data. My experience with
news spools is that once you fill up a disk like this, the filesystem has
to run at a lower occupancy for quite a while before it defragments
itself enough to get the performance back.

} On to the fixes. Firstly, since the problem is a fragmentation issue,
} reducing the blocksize on the filesystem should help. The default blocksize
} is 8192 bytes, and this can be lowered to 4096. This will result in smaller
} chunks of space being initially allocated for each file and will help to
} reduce (but not remove) fragmentation.

Yes, this should definitely help. You can also go to a 512-byte frag size
if you use 4K blocks (UFS allows at most an 8:1 block-to-frag ratio),
which will reduce the average amount of space wasted per file: half the
frag size, or 256 bytes per cache object instead of the 512 bytes you
lose with the default 1K frags.
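
If you're rebuilding the cache filesystems anyway, that combination would
be specified at newfs time. A rough sketch -- the device name is a
placeholder, and the remaining parameters should be tuned to your own
disks:

  newfs -b 4096 -f 512 -m 1 /dev/rdsk/c0t0d0s5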

BTW, if you set the frag size equal to the block size, you'll eliminate
the fragmentation problem entirely: you'll be able to fill the disk
completely even with minfree set to 0, and performance will be reasonably
good. The downside is that you'll waste an average of 2K to 4K of disk
per cache object (half of a 4K or 8K block).
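
Again purely as an untested sketch with a placeholder device name, that
layout would be something like:

  newfs -b 8192 -f 8192 -m 0 /dev/rdsk/c0t0d0s5

(or -b 4096 -f 4096 on platforms where 4K blocks are allowed).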

} The gotcha here is that if you're running on a sun4u platform -- ie, any
} kind of modern UltraSPARC system -- then this is likely not to work. The
} minimum blocksize you can have is the same as the pagesize, which
} (according to the manual page) means that 4096 bytes is not an option,
} although Sun's internal answercentre documentation is, apparently, less
} clear on this issue.
}
} The second potential fix is to change the optimisation strategy of the
} filesystem from "space" to "time". This alters the allocation strategy for
} files and may result in more space being used. It may also have a
} drastically bad effect on performance of the filesystem.

I think you want to optimize for "space" instead of "time". This will
cause the filesystem to maximize the number of free blocks by packing the
frags from multiple files into the same block, which reduces the number
of blocks that are fragmented. For files that don't grow over time (which
is true of the files in a squid cache or a news spool), the performance
penalty shouldn't be too bad.
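
The strategy can be chosen when the filesystem is built (newfs -o space)
or switched afterwards with tunefs; the tunefs form would be roughly this,
with a placeholder device name again:

  tunefs -o space /dev/rdsk/c0t0d0s5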

Where "space" optimization hurts is the case where you have files that grow
over time. If the frags for several files are all packed together, you
have to keep copying the last part of the file from place to place whenever
you need to add another frag to the end of the file. The "time"
optimization avoids this need to copy the file by storing the frags at
the end of each file in a fresh block, which allows more frags to be
allocated to the file until it fills the entire block.
Received on Mon Oct 12 1998 - 23:21:23 MDT
