Re: transfer-encoding

From: Robert Collins <robert.collins@dont-contact.us>
Date: Fri, 5 Jan 2001 11:31:54 +1100

----- Original Message -----
From: "Henrik Nordstrom" <hno@hem.passagen.se>
To: "Robert Collins" <robert.collins@itdomain.com.au>
Cc: <squid-dev@squid-cache.org>
Sent: Friday, January 05, 2001 8:36 AM
Subject: Re: transfer-encoding

> I don't think he is on the list, so I'll try to answer your questions:
>
> Robert Collins wrote:
> >
> > Hi,
> > this is addressed to Patrick R. McManus (I hope you're still
> > on the list Patrick :])
> >
> > I have a few questions about your te branch of Squid:
> >
> > 1) where can I get the encoding library (ATHY_COMPRESSION) you
> > reference in it?
>
> It is not openly available. However, according to Patrick, zlib should
> fit quite nicely without too many changes to the code.
Ah. ok.
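
If zlib does get swapped in, the hop-by-hop "deflate" coding is just the
zlib stream format (RFC 1950), so the coder itself is tiny. Standalone
sketch only - nothing to do with the actual patch, names made up:

/* Compress one buffer into the HTTP "deflate" coding the way a te
 * coder module might. */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

static int deflate_buffer(const char *in, size_t in_len,
                          char *out, size_t *out_len)
{
    z_stream z;
    memset(&z, 0, sizeof(z));
    if (deflateInit(&z, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;
    z.next_in = (Bytef *) in;
    z.avail_in = (uInt) in_len;
    z.next_out = (Bytef *) out;
    z.avail_out = (uInt) *out_len;
    if (deflate(&z, Z_FINISH) != Z_STREAM_END) {    /* out buffer too small? */
        deflateEnd(&z);
        return -1;
    }
    *out_len = z.total_out;
    deflateEnd(&z);
    return 0;
}

int main(void)
{
    const char body[] = "Hello, transfer-encoding world.\n";
    char coded[256];
    size_t coded_len = sizeof(coded);

    if (deflate_buffer(body, strlen(body), coded, &coded_len) == 0)
        printf("deflated %lu bytes down to %lu\n",
               (unsigned long) strlen(body), (unsigned long) coded_len);
    return 0;
}

In the real thing you'd obviously feed deflate() a buffer at a time
rather than Z_FINISH the whole body in one go, but that's the shape of it.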

> > 2) AFAICT the encoding/decoding is only done at one place in squid
> > - just before the data is sent to the client in client_side. Have
> > I missed something, or will the retrieved objects be cached in the
> > T-encoded form? Looking at it I thought the thing to do would be
> > un-encode to 'chunked' or non-encoded in the server-side code, so
> > the cached form is unencoded, and then encode as appropriate for
> > the client. What I can see at the moment is that the code can only
> > handle one set of nested transformations and that is done only at
> > the client_side. From a perf-tuning point of view I thought the
> > server-side decode could a) have a preferred disk storage format
> > (i.e. gzipped) and b) be given a te-list for the client_request,
> > so the data passed to client_side would be unwrapped only as much
> > as needed from the upstream response.
>
> My understanding is that the intent of the patch is to normally just
> let data pass straight through, and only recode data where absolutely
> needed.

It might be, but currently it just undoes and redoes the codings 'dumbly'.
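
Just so we're talking about the same thing: Transfer-Encoding lists the
codings in the order they were applied, so whoever decodes has to peel
them off in reverse - that's all the 'nested transformations' are. A
throwaway sketch of the parse, not code from the branch, names invented:

/* Split a Transfer-Encoding value into its codings and print the order
 * a decoder has to strip them in (reverse of the order applied).
 * No handling of ;q= parameters. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define MAX_CODINGS 8

static int parse_te(const char *value, char codings[][32], int max)
{
    int n = 0;
    const char *p = value;
    while (*p && n < max) {
        size_t len;
        while (*p == ',' || isspace((unsigned char) *p))
            p++;
        len = strcspn(p, ", \t");
        if (len == 0)
            break;
        if (len >= 32)
            len = 31;
        memcpy(codings[n], p, len);
        codings[n][len] = '\0';
        n++;
        p += strcspn(p, ",");
    }
    return n;
}

int main(void)
{
    char codings[MAX_CODINGS][32];
    int i, n = parse_te("gzip, chunked", codings, MAX_CODINGS);

    printf("decode order:");
    for (i = n - 1; i >= 0; i--)
        printf(" %s", codings[i]);
    printf("\n");           /* prints: decode order: chunked gzip */
    return 0;
}

So for "Transfer-Encoding: gzip, chunked" you dechunk first, then gunzip.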

>
> Sure, there is potential to use transfer-encoding to introduce
> hop-by-hop compression and such things.

His patch does that (with the athy library..)

>
> Disk storage compression belongs in the filesystem layer I think.

I agree - I'm not planning to touch that aspect (I was just thinking out loud). Isn't the reply path something like:

origin server -> our upstream socket
upstream socket -> fs layer
fs layer -> list of listening clients

I'm suggesting that the fs layer only unwrap as much as needed, and _perhaps_ keep the compressed version.
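
Very roughly, the decision I have in mind is something like this -
invented names, nothing to do with the real fs API, just to show the
shape of it:

/* Sketch only: given the codings a stored object still carries
 * (innermost first) and the client's TE header, work out how many
 * outer layers the fs/server side has to strip before the data is
 * handed to client_side. */
#include <stdio.h>
#include <string.h>

static int client_accepts(const char *coding, const char *te_header)
{
    /* crude substring test; a real version would tokenise and honour q=0 */
    return te_header != NULL && strstr(te_header, coding) != NULL;
}

static int layers_to_strip(const char *stored[], int n_stored,
                           const char *te_header)
{
    int keep = 0;
    while (keep < n_stored && client_accepts(stored[keep], te_header))
        keep++;
    return n_stored - keep;     /* strip everything above the usable prefix */
}

int main(void)
{
    const char *stored[] = { "gzip" };      /* object kept gzipped on disk */

    printf("client with TE: gzip -> strip %d layer(s)\n",
           layers_to_strip(stored, 1, "gzip, chunked"));
    printf("client with no TE    -> strip %d layer(s)\n",
           layers_to_strip(stored, 1, NULL));
    return 0;
}

i.e. a client that sends "TE: gzip" gets the stored gzip form passed
straight through, and only clients that can't take it cost us an inflate.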

>
> > 3) if it's idle & not under development, any objections to my setting
> > a tag (ie TE_OLD_20010104) on the current source and running
> > with it unguided? I really want to finish up Digest (rfc 2617)
> > authentication, but that requires trailer capability - which requires
> > chunked encoding. ... I could always start from scratch & just put
> > enough code in to chunk the data, but I see little point in
> > duplicating effort and only doing a half job at that...
>
> Sure, go ahead.
>
> The tag should be
> te-20010104
> (policy: sub-tags for a branch always begin with the branch name
> followed by - or _)

Just to be clear: I'm going to set a te-20010104 tag on his old work, and my own work will go on the te branch itself.

In fact what I'm going to do is:
te - core TE header support, chunked encoding only, plus an API for
adding new codings (Patrick's code is somewhat modular, but not grouped
cleanly enough to be plugin-style like the fs or repl code is).
te_compression - gzip/deflate support.
te_whatever - any new algorithm.

Sound ok, guys? The reason is that I don't need compression for Digest... and splitting it allows cleaner work anyway.
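
To give a better idea of what I mean by plugin-style, something along
these lines - all names invented here, not from Patrick's patch or the
real fs/repl tables:

/* Sketch of the kind of plugin table I mean for te coders.  The point
 * for Digest is the finish() hook: the chunked coder has to be able to
 * write trailer headers after the last chunk. */
#include <stdio.h>
#include <string.h>
#include <strings.h>

typedef struct _te_coder {
    const char *name;       /* token as it appears in TE / Transfer-Encoding */
    /* code one buffer; return bytes written to out, or -1 on error */
    int (*encode) (const char *in, size_t in_len, char *out, size_t out_len);
    /* write the end-of-body marker plus any trailer headers; only the
     * chunked coder really needs this (zero chunk + trailers + CRLF) */
    int (*finish) (const char *trailers, char *out, size_t out_len);
} te_coder;

/* trivially dumb "identity" coder standing in for chunked/deflate/... */
static int identity_encode(const char *in, size_t in_len,
                           char *out, size_t out_len)
{
    if (in_len > out_len)
        return -1;
    memcpy(out, in, in_len);
    return (int) in_len;
}

static int identity_finish(const char *trailers, char *out, size_t out_len)
{
    (void) trailers;
    (void) out;
    (void) out_len;
    return 0;               /* identity has no end marker and no trailers */
}

static te_coder te_identity = { "identity", identity_encode, identity_finish };

/* registered much like the fs/repl module tables, one entry per coding */
static te_coder *te_coders[] = { &te_identity, NULL };

static te_coder *te_coder_find(const char *name)
{
    int i;
    for (i = 0; te_coders[i] != NULL; i++)
        if (strcasecmp(te_coders[i]->name, name) == 0)
            return te_coders[i];
    return NULL;
}

int main(void)
{
    printf("identity coder %s\n",
           te_coder_find("identity") ? "registered" : "missing");
    return 0;
}

The chunked coder on the te branch would be the only one that really
uses finish() - that's where the Authentication-Info trailer for Digest
would have to go.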

Rob