thoughts on memory usage...

From: Michael O'Reilly <michael@dont-contact.us>
Date: Sat, 16 Aug 1997 21:01:15 +0800

Looking at a squid I'm running here I notice:

        StoreEntry 1369288 x 52 bytes = 69534 KB
        URL strings = 69498 KB

and squid saying that its total memory usage is 144MB. A check with
ps claims the process is more like 174MB (and it hasn't finished
loading objects yet; only 300,000 to go). (This is squid 1.NOVM.15, BTW.)

Glancing around in the squid source, it looks like the difference is
possibly due to the malloc() overhead. I notice that squid seems to do
things like...

        StoreEntry *e = xcalloc(1, sizeof(*e));
        ....
        e->url = xstrdup(log_url);

which generates lots of small malloc() requests.

As it doesn't seem possible to have a StoreEntry without a 'char
*url', what are the disadvantages of initially doing...

        StoreEntry *e = xcalloc(1, sizeof(*e) + strlen(log_url) + 1);
        e->url = (char *) (e + 1);
        strcpy(e->url, log_url);

which avoids the overhead of the second malloc entirely (a saving of
between 6 and 12 MB of RAM in my situation, at 4 to 8 bytes of malloc
overhead per allocation).
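
For concreteness, here's a minimal sketch of the combined allocation
as a helper function (the name new_store_entry() is made up, and I'm
assuming the usual 'char *url' member; this isn't the real squid code):

        /* Hypothetical helper: allocate the StoreEntry and its URL string
         * in one block, so there's only one malloc header to pay for. */
        static StoreEntry *
        new_store_entry(const char *log_url)
        {
            size_t len = strlen(log_url) + 1;
            StoreEntry *e = xcalloc(1, sizeof(*e) + len);

            e->url = (char *) (e + 1);  /* string lives right after the struct */
            memcpy(e->url, log_url, len);
            return e;
        }

The whole thing then goes away with a single xfree(e); the only catch
is that the url can no longer be freed or replaced independently of
the entry, which should be fine since it never changes over the
entry's lifetime.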

The next step is to change the declaration of sentry, such that it
reads...

        {
                .....
                char key[];
        } sentry;

so you don't need the 4 bytes for the 'char *key' pointer at all
(which is an instant 6 MB RAM saving for me). Unfortunately, this
breaks the current hash code, which makes assumptions about the
element order.
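
To make item 2 concrete, here's roughly the shape I mean, sketched as
a hypothetical hash element (the field names and layout are made up,
and I'm using an old-style one-element array since not every compiler
likes 'char key[]'):

        /* Hypothetical hash element with the key stored in-line at the end
         * of the struct instead of behind a separate 'char *key' pointer. */
        typedef struct _hash_link {
            struct _hash_link *next;
            void *item;
            char key[1];            /* really strlen(key)+1 bytes long */
        } hash_link;

        static hash_link *
        hash_link_create(const char *key, void *item)
        {
            /* sizeof(*l) already includes key[1], which covers the NUL */
            hash_link *l = xcalloc(1, sizeof(*l) + strlen(key));

            strcpy(l->key, key);
            l->item = item;
            return l;
        }

Lookup and comparison code can stay the same, since l->key is still a
normal NUL-terminated string; it's only the code that assumes 'key'
is the first pointer in the struct that would need touching.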

Then the radical stuff.

Given that many URLs share a common prefix ('http://' if nothing
else), wouldn't it make sense to compress the URL before storing it?
Even a really simple dictionary compression scheme would win fairly
large savings: statically calculate the 254 most common prefixes, and
use the first byte of the key to index into that table. (254 because
you want one entry for an empty prefix, and you can't use byte value
0 if you want to keep using string operations.) Given that you'll get
a 6 byte win just from 'http://', it doesn't seem like a bad idea...
Particularly since strcmp() et al. still work just fine on the
encoded form, it shouldn't even require many code changes (only input
and display of URLs would change).
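
Something like the following is what I'm picturing for item 3. The
table entries here are made up and tiny, just to show the mechanics;
the encoded string stays NUL-terminated, so strcmp() keeps working:

        /* Hypothetical static prefix table.  Entry 0 is the empty prefix;
         * the stored byte is index+1 so it's never 0. */
        static const char *url_prefix[] = {
            "",             /* no prefix, stored as byte 1 */
            "http://",      /* stored as byte 2 */
            "http://www.",
            "ftp://",
        };
        #define N_PREFIX (sizeof(url_prefix) / sizeof(url_prefix[0]))

        /* Encode a URL: one prefix byte followed by the rest of the string. */
        static char *
        url_prefix_encode(const char *url)
        {
            int i, best = 0;
            size_t bestlen = 0;
            char *out;

            for (i = 1; i < (int) N_PREFIX; i++) {
                size_t len = strlen(url_prefix[i]);
                if (len > bestlen && strncmp(url, url_prefix[i], len) == 0) {
                    best = i;
                    bestlen = len;
                }
            }
            out = xmalloc(strlen(url) - bestlen + 2);
            out[0] = (char) (best + 1);
            strcpy(out + 1, url + bestlen);
            return out;
        }

        /* Decode back to the full URL (caller frees the result). */
        static char *
        url_prefix_decode(const char *enc)
        {
            const char *pre = url_prefix[(unsigned char) enc[0] - 1];
            char *out = xmalloc(strlen(pre) + strlen(enc + 1) + 1);

            strcpy(out, pre);
            strcat(out, enc + 1);
            return out;
        }

Since the encoding is deterministic and reversible, two encoded URLs
compare equal exactly when the originals do.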

Taken together, the above would save me at least 20 to 24 MB of RAM.

Comments?

I guess if I get time I'll look at implementing items 1 and
3. Item #2 looks like too much work for me. :)

Michael.