Re: [Server-devel] Squid tuning recommendations for OLPC School Server tuning...

From: Adrian Chadd <adrian_at_squid-cache.org>
Date: Tue, 23 Sep 2008 11:09:45 +0800

G'day,

I've looked into this a bit (and have a couple of OLPC laptops to do
testing with) and... well, it's going to take a bit of effort to make
Squid "fit".

There's no "hard limit" for Squid, and Squid (any version) handles
memory allocation failures very, very poorly (read: it crashes).

You can limit cache_mem, which caps the size of the in-memory object
cache; you could also probably modify the Squid codebase to start
purging objects at a certain object count rather than at a given
disk+memory storage size. That wouldn't be difficult.
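
For illustration (the numbers are purely made up), the knobs that exist
today:

    cache_mem 8 MB        # caps the in-memory *object* cache only
    memory_pools off      # hand freed memory back to the OS

Note that cache_mem says nothing about the index, the pools or the
per-connection buffers, which is why it is not a hard limit.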

The big problem: you won't get Squid down to 24MB of RAM with the
current tuning parameters. At least, I couldn't, and I've been playing
around with Squid on OLPC-like hardware (an SBC with a 500MHz Geode and
256/512MB RAM). It will take quite a bit of development to "slim" some
of the internals down so they scale better with restricted memory
footprints. It's on my personal TODO list (it mostly lines up with a
bunch of performance work I'm slowly working towards), but as the bulk
of that happens in my spare time, I don't have a fixed timeframe at the
moment.

Adrian

2008/9/23 Martin Langhoff <martin.langhoff_at_gmail.com>:
> Hi!
>
> I am working on the School Server (aka XS: a Fedora 9 spin, tailored
> to run on fairly limited hardware), and I'm preparing the
> configuration settings for it. It's a somewhat new area for me -- I've
> set up Squid before on mid-range hardware... but this is... different.
>
> So I'm interested in understanding more about the variables affecting
> memory footprint, and how I can set a _hard limit_ on the wired memory
> that Squid allocates.
>
> In brief:
>
> - The workload is relatively "light" -- 3K clients is the upper bound.
>
> - The XS will (in some locations) be hooked to *very* unreliable
> power... uncontrolled shutdowns are the norm. Is this ever a problem with Squid?
>
> - After a bad shutdown, graceful recovery is the most important
> aspect. If a few cached items are lost, we can cope...
>
> - The XS hardware runs many services (mostly web-based), so Squid gets
> only a limited slice of memory. To make matters worse, I *really*
> don't want the core working set (Squid, Pg, Apache/PHP) to get paged
> out. So I am interested in pegging the maximum memory Squid will take
> for itself.
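>
> (I know I could wrap the daemon in an address-space rlimit from the
> init script -- something like "ulimit -v 98304" for a 96MB cap -- but
> that only turns over-allocation into a malloc failure inside Squid, so
> I'd much rather have Squid limit itself.)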
>
> - The XS hardware is varied. In small schools it may have 256MB RAM
> (likely running on XO hardware + a USB-connected external hard drive).
> Medium-to-large schools will have the recommended 1GB RAM and a cheap
> SATA disk. A few very large schools will be graced with more RAM (2 or
> 4GB).
>
> ... so the RAM allocation for Squid will probably range between 24MB
> at the lower end and 96MB at the 1GB "recommended" RAM.
>
> My main question is: how would you tune Squid 3 so that
>
> - it does not directly allocate more than 24MB / 96MB? (Assume that
> the Linux kernel will be smart about mmapped stuff, and aggressive
> about caching -- I am talking about the memory Squid will claim for
> itself.)
>
> - it still gives us good throughput? :-)
>
>
>
> So far Google has turned up very little information, and what I've
> found seems rather old. It can be summarised as follows:
>
> - The index is malloc'd, so the number of entries in the index will
> be the dominant concern WRT memory footprint.
>
> - Each index entry takes between 56 and 88 bytes, plus additional,
> unspecified overhead. Is 1KB per entry a reasonable conservative
> estimate? (Rough arithmetic below.)
>
> - Discussions about compressing or hashing the URL in the index are
> recurrent -- is the uncompressed URL stored there? Does that mean up
> to 4KB per index entry?
>
> - The index does not seem to be mmappable, or otherwise able to be
> paged out gracefully.
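>
> A rough back-of-envelope from those figures (the ~13KB mean object
> size is the classic Squid assumption; the per-entry cost is my guess):
>
>     2GB cache_dir / ~13KB mean object size   ~= 160K index entries
>     160K entries x ~180 bytes (entry + URL)  ~= 29MB of index
>
> So 1KB/entry looks very conservative, but even at ~180 bytes/entry a
> 2GB cache_dir costs more than the whole 24MB budget of the small
> configuration.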
>
> We can rely on the (modern) Linux kernel doing a fantastic job of
> caching disk I/O and shedding those cached entries when under memory
> pressure, so I am likely to set Squid's own cache to something really
> small. Everything I read points to the index being my main concern --
> is there a way to limit (a) the total memory the index is allowed to
> take, or (b) the number of index entries allowed?
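>
> Concretely, I'm imagining something along these lines (directive names
> are from the Squid 3 docs; the sizes are pure guesswork on my part).
> Since I can't find a direct "maximum index entries" knob, the
> cache_dir size is what would indirectly bound the entry count:
>
>     # tiny in-memory object cache; let the kernel page cache work
>     cache_mem 8 MB
>     maximum_object_size_in_memory 64 KB
>
>     # a small disk cache keeps the index small
>     cache_dir aufs /var/spool/squid 2048 16 256
>     maximum_object_size 4 MB
>
>     # free memory rather than pooling it
>     memory_pools off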
>
> Does the above make sense in general? Or am I barking up the wrong tree?
>
>
> cheers,
>
>
>
> martin
> --
> martin.langhoff_at_gmail.com
> martin_at_laptop.org -- School Server Architect
> - ask interesting questions
> - don't get distracted with shiny stuff - working code first
> - http://wiki.laptop.org/go/User:Martinlanghoff
> _______________________________________________
> Server-devel mailing list
> Server-devel_at_lists.laptop.org
> http://lists.laptop.org/listinfo/server-devel
>
>
Received on Tue Sep 23 2008 - 03:09:48 MDT
