Re: Some optimisations

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Tue, 19 Sep 2000 11:22:50 +0200

Andres Kroonmaa wrote:

> That's how it should be. But when I think of how the OS should do it,
> I tend to assume that the OS reserves limited buffering per FD,
> overrunning which would cause a fault to handle it. Or is libc
> buffering that?

Squid is not using the buffered C library (stdio) write functions; it is
using the direct "UNIX" calls, so in this case it is the kernel doing
the buffering.

The kernel usually handles this using the normal paging algorithm,
combined with some page flushing daemon to make sure data is written out
even when there is no memory pressure.

On most systems this puts the timescale in the range of 30 seconds or
so, less if there is high demand for memory.

> Also, when the serverside is relatively slow, dirty pages may get swapped
> out before being filled completely. This means that appending would cause
> read+modify+write to the same disk block later.

True, but in most cases the page is still in memory, only causing
another write to the disk.

> Isn't alignment to pages coming naturally from the malloc lib? I thought
> malloc at least tries to return memory aligned to the size of the
> allocation. Or perhaps we'd want to add xvalloc to util.c and use it for
> disk buffers? What about network buffers?

No, malloc only returns data aligned suitably for C data types. On most
systems this is 16 bytes, while page sizes vary from 4KB to 16KB. You
get page-aligned data when you allocate memory using mmap(), or by
intentionally mallocing one page more than you need and then using
pointer arithmetic to round the start up to the next page boundary,
thereby cropping away the unaligned space around the allocation.
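
As an illustration only (nothing like this exists in util.c; the names
are made up), the round-up approach could look roughly like this:

    #include <stdlib.h>
    #include <unistd.h>

    /* Hypothetical helper: allocate one extra page and round the start
     * up to the next page boundary.  The raw malloc pointer has to be
     * remembered separately, since that is what must be passed to free(). */
    void *
    page_aligned_alloc(size_t size, void **raw_out)
    {
        unsigned long pagesize = (unsigned long) sysconf(_SC_PAGESIZE);
        char *raw = malloc(size + pagesize);
        if (!raw) {
            *raw_out = NULL;
            return NULL;
        }
        *raw_out = raw;
        /* round up to the next multiple of the page size */
        return (void *) (((unsigned long) raw + pagesize - 1) & ~(pagesize - 1));
    }

The mmap() route avoids this bookkeeping, since mmap() always returns
page-aligned memory, at the cost of a system call per allocation.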

--
Henrik Nordstrom
Squid hacker
Received on Tue Sep 19 2000 - 03:42:04 MDT
