Re: thoughts on memory usage...

From: David Luyer <luyer@dont-contact.us>
Date: Wed, 20 Aug 1997 12:11:47 +0800 (WST)


On Tue, 19 Aug 1997, Srecko Tahirovic wrote:

>Hello!
>
>> Opinions, comments, suggestions, anyone?
>
>We could take all allowed HTTP characters (rfc2068), then convert them
>into something smaller. Basically a user-defined base64. If we could use
>8-bit to 6-bit conversion, we would get a 25% reduction in size (on top
>of the 33%).
>
>But we should take a look at gzip for better compression. With gzip I got
>67% compression of URLs (4240448 bytes (with \n) => 1359490 bytes).

Gzip would want to compress URLs en masse, and would use a (relatively)
huge amount of CPU time. This is a scheme for compressing URLs
one-by-one, keeping them as text, without using excessive CPU time.
Bit-shuffling, if done in some way that still prevents a NUL occurring in
the stream, could be a good way to reduce the URL strings' memory impact
further, though. I'll make sure my "compression" produces output
permissible under rfc2068, so that a bit-shuffling technique could then be
applied on top, and I'll look at doing this.
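As a rough sketch of the 8-bit-to-6-bit idea (the table and function names here are mine for illustration, not anything from Squid): map each allowed URL character to a 6-bit code, then pack four codes into three bytes for the 25% saving. Note that naive packing can still produce NUL bytes in the output, which is exactly the wrinkle the bit-shuffling above would have to solve:

```c
#include <assert.h>
#include <string.h>

/* Illustrative 6-bit packing of URL text.  The alphabet below is a
 * made-up subset of characters legal in URLs; a real table would be
 * chosen from rfc2068 and padded to a full 64 entries. */
static const char alphabet[] =
    "abcdefghijklmnopqrstuvwxyz0123456789-._~:/?#[]@!$&'()*+,;=%";

/* Return the 6-bit code for c, or -1 if c is not in the table. */
static int code_of(char c)
{
    const char *p = strchr(alphabet, c);
    return p ? (int)(p - alphabet) : -1;
}

/* Pack n characters of src into dst, 6 bits per character; returns
 * the packed length in bytes (roughly 3/4 of n).  Caution: packed
 * bytes can be zero, so the result is NOT a safe C string. */
static size_t pack6(const char *src, size_t n, unsigned char *dst)
{
    size_t i, bits = 0, out = 0;
    unsigned int acc = 0;       /* bit accumulator */

    for (i = 0; i < n; i++) {
        int c = code_of(src[i]);
        assert(c >= 0);         /* caller must pass packable chars */
        acc = (acc << 6) | (unsigned)c;
        bits += 6;
        if (bits >= 8) {        /* a full byte is ready */
            bits -= 8;
            dst[out++] = (unsigned char)(acc >> bits);
        }
    }
    if (bits > 0)               /* flush any leftover bits, zero-padded */
        dst[out++] = (unsigned char)(acc << (8 - bits));
    return out;
}
```

Packing "abcd" (codes 0,1,2,3) yields the three bytes 0x00 0x10 0x83 — the leading NUL shows why a plain 6-bit scheme can't be stored as a C string without a further transformation.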

David.

(ps: the code I posted earlier still had bugs; I'm gradually working them
out and optimizing a bit more)

Received on Tue Jul 29 2003 - 13:15:42 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:11:22 MST