Re: mmap

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Sun, 31 May 1998 16:09:05 +0200

Stephen R. van den Berg wrote:

> It would allow us roughly to double cache_mem, so the hot object cache
> could be a lot larger on the same system.

I said at best, which is on a system with very dumb buffer management.
Many OSes use some kind of clever buffer management which greatly
limits the amount of memory we would gain from using mmap, so you
can't generalise from that worst case. I would estimate the average
gain from fewer buffers at somewhere around 10-20% of cache_mem on a
VM-based file I/O system.
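
For illustration, here is a minimal sketch (standalone C, not Squid
code; the function names are made up) of the two I/O styles this
estimate is about: with read() the object lives both in the OS page
cache and in our own cache_mem buffer, while with mmap() the mapping
shares its pages with the page cache.

/*
 * Sketch only: read() keeps a second copy of the object in a private
 * buffer; mmap() lets the mapping and the page cache share pages.
 */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static char *load_with_read(const char *path, size_t *len)
{
    struct stat sb;
    char *buf = NULL;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return NULL;
    if (fstat(fd, &sb) == 0 && (buf = malloc(sb.st_size)) != NULL) {
        /* second copy of the data, counted against cache_mem */
        if (read(fd, buf, sb.st_size) != sb.st_size) {
            free(buf);
            buf = NULL;
        }
        *len = sb.st_size;
    }
    close(fd);
    return buf;
}

static char *load_with_mmap(const char *path, size_t *len)
{
    struct stat sb;
    char *map = NULL;
    int fd = open(path, O_RDONLY);

    if (fd < 0)
        return NULL;
    if (fstat(fd, &sb) == 0) {
        /* no private copy; the mapped pages are the page cache pages */
        map = mmap(NULL, sb.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            map = NULL;
        *len = sb.st_size;
    }
    close(fd);          /* the mapping stays valid after close() */
    return map;
}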

> Only if we actually access the full mmapped regions and the OS
> is not clever enough about releasing pages which haven't been
> used for a while (which, Linux 2.0 currently isn't all that
> clever about, I admit; Linux 2.2 does this a whole lot better;
> I can't speak for other OSes).

I do expect some OSes to be far worse than Linux 2.0. Linux 2.0 kindly
accepts the hint that munmap()ed pages are OK to reclaim, but I do not
expect all current OSes to even do this.

And I don't expect Linux 2.2 to automagically be as good as explicit
hints, especially not if we are maintaining our own mmap()ed hot-cache.
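
For instance, an explicit hint could look something like the sketch
below (assuming a POSIX madvise() is available; the function name is
made up and this is not current Squid code):

/*
 * Sketch of an explicit reclaim hint.  Once the hot-cache copy of an
 * object is no longer needed, tell the kernel its pages may be dropped
 * instead of waiting for the VM to work that out on its own.
 */
#include <sys/mman.h>

static void release_object_pages(void *addr, size_t length)
{
    /* MADV_DONTNEED: we will not need these pages again soon. */
    madvise(addr, length, MADV_DONTNEED);

    /* Or drop the mapping entirely, which is the hint Linux 2.0 already
     * honours:
     *   munmap(addr, length);
     */
}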

Neither Squid nor the common OSes of today are ready to use mmap on a
full scale in Squid.

Should we end the mmap vs read/write discussion here?

What we could do is to prepare by looking through how I/O is done in
Squid, and try to find a model that could eventually be transformed to
mmap(). The first thing that pops into my mind is to let the I/O
routines allocate the needed memory pages, and then pass these pages
around with some clever book-keeping (a reference count or something
similar).
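
To make that concrete, a hypothetical sketch of such book-keeping might
look like this (invented names, not existing Squid structures; plain
malloc() today, but the same reference counting would survive a later
switch to mmap()ed pages):

/*
 * Hypothetical reference-counted page buffer.  The I/O layer allocates
 * whole pages, hands them around, and the last user frees them.
 */
#include <stdlib.h>

typedef struct {
    char *data;        /* page-sized buffer; later possibly mmap()ed */
    size_t size;
    int refcount;
} PageBuf;

static PageBuf *pagebuf_alloc(size_t size)
{
    PageBuf *p = calloc(1, sizeof(PageBuf));
    if (!p)
        return NULL;
    p->data = malloc(size);
    if (!p->data) {
        free(p);
        return NULL;
    }
    p->size = size;
    p->refcount = 1;   /* creator holds the first reference */
    return p;
}

static PageBuf *pagebuf_ref(PageBuf *p)
{
    p->refcount++;
    return p;
}

static void pagebuf_unref(PageBuf *p)
{
    if (--p->refcount == 0) {
        free(p->data); /* would become munmap() in an mmap()ed variant */
        free(p);
    }
}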

---
Henrik Nordström