fixing >4k reply header store reads

From: Adrian Chadd <adrian@dont-contact.us>
Date: Tue, 8 Jan 2008 02:17:31 +0900

I've been thinking about how to fix that particular issue whilst hacking on the
s27_adri branch. The main requirement is modifying the store to allow arbitrarily
sized pages rather than just 4k pages, and then finding a way to read the header
data into that first page in store memory. Once it's read, the whole lot can
be parsed and fed through.

That might not be quite doable in plain Squid-2 or Squid-3, so here's my
suggestion:

* add a new TLV entry, hdr_sz, which covers the reply status + headers
* in storeClientFileRead*, do something like this:

  + read in the meta data
  + if we have no hdr_sz, hope it fits in 4k
  + if we have a hdr_sz, then
    - allocate temporary buffer at least hdr_sz bytes in size
    - copy what we have into that
    - schedule file reads until that's satisfied
    - parse
    - copy leftover into buffer to return as part of the body
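The steps above can be sketched in C. This is a hypothetical simulation, not
actual Squid-2 code: `struct disk`, `disk_read` and `read_header` are made-up
names standing in for the store client read path, and the disk model simply
hands back at most one 4k page per read, the way the current store does.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SZ 4096

/* toy model of a swap file that returns at most one 4k page per read */
struct disk {
    const char *data;
    size_t len, pos;
};

static size_t
disk_read(struct disk *d, char *buf, size_t want)
{
    size_t n = d->len - d->pos;
    if (n > want)
        n = want;
    if (n > PAGE_SZ)
        n = PAGE_SZ;            /* the store hands back one page at a time */
    memcpy(buf, d->data + d->pos, n);
    d->pos += n;
    return n;
}

/*
 * Keep scheduling page reads until hdr_sz bytes (the reply status +
 * headers, as recorded by the new TLV entry) are buffered.  Returns the
 * total bytes buffered; bytes [*body_off, got) are leftover body data
 * that the caller should hand back as the start of the reply body.
 */
static size_t
read_header(struct disk *d, size_t hdr_sz, char *buf, size_t cap,
            size_t *body_off)
{
    size_t got = 0;

    while (got < hdr_sz && got < cap) {
        size_t n = disk_read(d, buf + got, cap - got);
        if (n == 0)
            break;              /* truncated object; caller must cope */
        got += n;
    }
    *body_off = hdr_sz;
    return got;
}
```

With a 6000-byte header the loop takes two 4k page reads, and the tail of the
second page (everything past byte 6000) is body data to be passed through.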

Now, that should work, but it's inefficient. Really, guys, reading 4k is about
as fast as reading 64k from a UNIX filesystem disk, so we should probably
modify the store to do -that-.
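To put a rough number on that: if seek latency dominates and a read costs about
the same whether it's 4k or 64k, the win from bigger store pages is purely the
reduction in the number of reads an object takes. Trivial arithmetic, but it
makes the point (illustrative helper, not a Squid function):

```c
#include <stddef.h>

/* number of reads needed to pull obj_sz bytes in chunk-byte reads */
static size_t
reads_needed(size_t obj_sz, size_t chunk)
{
    return (obj_sz + chunk - 1) / chunk;    /* ceil(obj_sz / chunk) */
}
```

A 64k object costs 16 page reads with 4k pages but a single read with 64k
pages, for roughly the same per-read price.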

I see Henrik's setting up to do this in Squid-3. I'll implement this in Squid-2,
but it'll be a side effect of all the changes I'm currently making. Once
this first lot of reference buffers and zero-copy parsers is done, I'll be taking
a large knife to the disk storage code to support reading into supplied buf_t's
(or generating new ones!), and the above problem will be "fixed" as a side effect
of those changes.

2c,

Adrian
Received on Mon Jan 07 2008 - 10:08:37 MST