data alteration efficiency

From: Robert Collins <robert.collins@dont-contact.us>
Date: Tue, 16 Jan 2001 19:24:10 +1100

Question for you hardened developers :-]

In Squid we send data from the origin to the client via a series of callbacks:
the socket handler allocates (or reuses) a spare buffer, then calls a callback
  callback(buf, size, state_data)
which may alter the data, add to it, etc., and in turn calls the next
  callback(buf, size, state_data)
and so forth.
If a callback in the chain wants to add data, it can either allocate a larger buffer, _or_
call the next callback n times -
i.e. for a wrapping function that doesn't change the data in place:
  callback(localbuf, tempsize, state_data)
  callback(buf, size, state_data)

Patrick McManus's TE code uses a similar system, but what it does is cycle through a series of filter functions
  filter(*inbuf, inbufl, **outbuf, *outbufl, state_data)
which _requires_ each function to allocate memory if it wants to return more data than inbufl.

Which is more efficient? My thinking is that it's better to call a callback twice than to memcpy data around within a
filter until we have a large enough buffer. It'll also make some of the encoding/decoding code easier.

What I'm basically thinking is that the filter functions should call each other in a chain, rather than being called and returning. This
allows local buffers to be used as working space, reducing malloc calls and the amount of memcpying that occurs.

I'm also hoping to make this a generic chaining mechanism, so that things like Joe's new project have a framework they can tie into.

Thoughts and opinions?

Rob
Received on Tue Jan 16 2001 - 01:12:31 MST

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:13:18 MST