RE: [squid-users] Recurrent crashes and warnings: "Your cache is running out of filedescriptors"

From: Jenny Lee <bodycare_5_at_live.com>
Date: Thu, 13 Oct 2011 11:22:26 +0000

> > Perhaps you are running out of inodes?
> >
> > "df -i" should give you what you are looking for.
>
>
> Well done. df indeed reports that I am out of inodes (100% used).
> I've seen that a Sarg daily report contains about 170,000 files. I am
> starting to tar.gzip them.
>
> Thank you very much Jenny.
>
>
> Leonardo
 

Glad this is solved. Note that the inode count of an ext3 filesystem is fixed when the filesystem is created; the old inode-max sysctl (sized at a few multiples of the /proc/sys/fs/file-max setting) only capped the kernel's in-memory inode cache, not the on-disk inodes. If you keep hitting the limit, the cure is to recreate the filesystem with more inodes.
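For example, roughly like this (the device and mount point below are made-up placeholders, and mkfs wipes everything on the partition):
 
    # check inode usage per filesystem
    df -i
 
    # recreate the filesystem with a lower bytes-per-inode ratio,
    # i.e. more inodes (DESTROYS all data on the partition!)
    umount /var/log/sarg
    mkfs.ext3 -i 4096 /dev/sdb1      # one inode per 4 KiB of space
    # or ask for an explicit inode count instead:
    # mkfs.ext3 -N 2000000 /dev/sdb1
    mount /dev/sdb1 /var/log/sarg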
 
However, 170,000 files in a single directory on a mechanical drive will make things awfully slow; ext3 scans directories linearly unless the dir_index feature is enabled.
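If the files have to stay unpacked, spreading them over subdirectories keeps each directory small. A rough sketch, assuming the reports live under /var/log/sarg (a made-up path):
 
    # bucket files into 256 subdirectories keyed on the first
    # two hex characters of an md5 of each filename
    cd /var/log/sarg
    for f in *; do
        [ -f "$f" ] || continue
        d=$(printf '%s' "$f" | md5sum | cut -c1-2)
        mkdir -p "$d"
        mv -- "$f" "$d/"
    done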
 
Also, ext4 is preferable since deletes are done in the background. In our tests on an SSD, deleting 1 million files took 9 mins on ext3 and about 7 secs on ext4.
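If you want to reproduce that comparison, something along these lines works (the mount point is a placeholder, and the absolute numbers will of course vary with your hardware):
 
    # create a million empty files, then time their removal
    mkdir /mnt/test/many && cd /mnt/test/many
    seq 1 1000000 | xargs touch
    cd .. && time rm -rf many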
 
Whenever we need to deal with a high number of files (sometimes to the tune of 100 million), we move them to an SSD with ext4 and perform the operations there. And yes, that moving part... is very painful too, unless the files were already tarred :)
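Streaming a tar between the drives instead of copying file-by-file avoids the per-file overhead on both ends. A sketch with made-up paths:
 
    # pack on the HDD and unpack on the SSD in one pipeline:
    # one sequential read instead of millions of per-file copies
    tar -C /hdd/files -cf - . | tar -C /ssd/work -xf -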
 
Let me give you an example. Processing 1 million files in a single directory (read, write, split into directories, archive):
 
HDD: 6 days
SSD: 4 hours
 
Jenny
Received on Thu Oct 13 2011 - 11:22:33 MDT
