Re: [squid-users] Réf. : [squid-users] cache dir limits

From: Victor Ivanov <v0rbiz@dont-contact.us>
Date: Fri, 5 Dec 2003 16:55:58 +0200

On Fri, Dec 05, 2003 at 02:41:05PM +0100, sdavy@bics.fr wrote:
>
> Hello Victor,
>
> No answers from me, just questions ;-)
> you said that your cache became too big, and I see that it is something
> like 100G large. Actually, I'm interested here to know how you see that
> your cache dir was too slow, how many users you have, and more generaly, I
> was wondering if a big cache is good for performance. Does anybody have
> some clue about that?

Hmm, I'm ashamed to admit it, but I don't have statistics for my cache.
Really, I'm just too lazy and act too late :) I'll set up some statistics-
gathering stuff soon.
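
For what it's worth, a minimal way to pull basic numbers out of a running
Squid is the cache manager interface; something like this (assuming
squidclient is installed alongside Squid and talks to the default port):

  squidclient mgr:info          # uptime, request rates, hit ratios
  squidclient mgr:storedir      # per-cache_dir object counts and disk usage
  squidclient mgr:utilization   # CPU and throughput over recent intervals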

The machine ran for about three or more months before it hit diskd's limits
on that hardware. IIRC, the cache dir was about 50G then. Now it's 80G and
it has reached ufs's limits. I raised the kernel's hard data seg limit to
1G and it was a GREAT improvement. It loads 4595677 entries in 340.8 seconds
(13477.7 objects/sec). With a 512M limit it tries to load them for about an
hour, then hits the limit and commits seppuku.
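
In case it helps anyone else, on FreeBSD (as an example; other OSes have
their own knobs) the hard data segment limit can be raised with the MAXDSIZ
kernel option or the matching loader tunable. A rough sketch, not my exact
config, so check your release's NOTES/LINT for the exact spelling:

  # /boot/loader.conf -- allow processes a 1G data segment
  kern.maxdsiz="1073741824"

  # or in the kernel config file, before rebuilding the kernel:
  options MAXDSIZ="(1024UL*1024*1024)"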

Here it is a few minutes after starting up and rebuilding the store:

  PID USERNAME  PRI NICE  SIZE   RES STATE    TIME  WCPU   CPU COMMAND
  486 nobody     96    0  464M  403M select   1:16 0.10% 0.10% squid

It performs quite well now. I'm going to expand the cache_dir limit until
it starts crashing again :)

There aren't many clients and the request rate is low, though.

If you're asking whether it has an effect, it does. For example, all online
installations, including Microsoft's patches, updates, IE, etc., get cached,
but you need a big cache dir and a big max object size.
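
If it's useful, this is roughly what that looks like in squid.conf (numbers
and paths are just illustrative, not my exact setup):

  # 80 GB ufs cache_dir with the usual 64/256 directory layout
  cache_dir ufs /cache 80000 64 256
  # raise the cap so service packs and full installers get cached
  maximum_object_size 200 MB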

Received on Fri Dec 05 2003 - 07:32:16 MST
