Yet Another Note on Memory Allocation

From: Yar Tikhiy <yar@dont-contact.us>
Date: Fri, 19 Dec 1997 19:05:29 +0300 (MSK)

Hello everybody,

I did RTFS today and noticed the following.

When squid (1.1.18) loads its swap log into memory, it writes a new,
clean swap log at the same time. The old log is read with fgets(),
while the new log is written with the asynchronous file_write().

file_write() requires that the data to be written reside in malloc()'ed
space, and the data are free()'d only after the write is complete.
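
For illustration, here is a minimal, self-contained sketch of that
contract (the names below are made up, not the real squid functions):
the caller copies the log line into its own malloc()'ed buffer and
hands the writer a function to free it once the queued write has
completed.

#include <stdlib.h>
#include <string.h>

typedef void FREE(void *);

/* Stand-in for the asynchronous file_write(): a real one would queue
 * the buffer and call free_func(buf) from the completion handler, so
 * the buffer must live in its own malloc()'ed block until then. */
static void
queue_write_sketch(int fd, char *buf, int len, FREE * free_func)
{
    (void) fd;
    (void) len;
    free_func(buf);             /* pretend the queued write completed */
}

static void
swap_log_line_sketch(int fd, const char *logmsg)
{
    size_t n = strlen(logmsg) + 1;
    char *buf = malloc(n);      /* one malloc() per log line */
    memcpy(buf, logmsg, n);
    queue_write_sketch(fd, buf, (int) strlen(buf), free);
}

int
main(void)
{
    swap_log_line_sketch(1,
        "00000001 12345678 00000000 00000000      1234 http://example.com/\n");
    return 0;
}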

When doing a fast foreground rebuild, the buffers do not get free()'d
fast enough, so squid makes the arena look like it was shot with a
machine-gun: it malloc()'s a StoreEntry and a url, then malloc()'s the
log buffer, and repeats this as many times as there are objects in the
swap; only afterwards do the buffers get free()'d, leaving the arena
badly fragmented.
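
To make that concrete, here is a toy program (not squid code; the sizes
and the object count are invented) showing the same allocation order:
the short-lived log buffers end up wedged between the long-lived
entries, and their later free() punches holes all over the arena.

#include <stdlib.h>

int
main(void)
{
    enum { NOBJ = 100000 };
    static char *logbuf[NOBJ];
    int i;

    for (i = 0; i < NOBJ; i++) {
        void *entry = malloc(128);  /* long-lived: the StoreEntry */
        char *url = malloc(64);     /* long-lived: the url string */
        logbuf[i] = malloc(512);    /* short-lived: the queued log line */
        (void) entry;
        (void) url;
    }
    /* The queued writes drain only after the rebuild, so each of these
     * free()'s leaves a hole wedged between two live allocations. */
    for (i = 0; i < NOBJ; i++)
        free(logbuf[i]);
    return 0;
}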

Maybe it would be better to write the new swap log with an ordinary write()?

SY, Yar

P.S.

diff -u store.c.orig store.c
--- store.c.orig Fri Dec 19 18:35:56 1997
+++ store.c Fri Dec 19 18:06:40 1997
@@ -1215,6 +1215,23 @@
        xfree);
 }
 
+static void
+storeFastSwapLog(const StoreEntry * e)
+{
+    LOCAL_ARRAY(char, logmsg, MAX_URL << 1);
+    /* Note this printf format appears in storeWriteCleanLog() too */
+    sprintf(logmsg, "%08x %08x %08x %08x %9d %s\n",
+        (int) e->swap_file_number,
+        (int) e->timestamp,
+        (int) e->expires,
+        (int) e->lastmod,
+        e->object_len,
+        e->url);
+    write(swaplog_fd,
+        logmsg,
+        strlen(logmsg));
+}
+
 static void
 storeSwapOutHandle(int fd, int flag, StoreEntry * e)
 {
@@ -1488,7 +1505,10 @@
            expires,
            timestamp,
            lastmod);
-       storeSwapLog(e);
+       if (opt_foreground_rebuild)
+           storeFastSwapLog(e);
+       else
+           storeSwapLog(e);
        HTTPCacheInfo->proto_newobject(HTTPCacheInfo,
            urlParseProtocol(url),
            (int) size,