Re: Data --> object store --> client

From: Joey Coco <anesthes@dont-contact.us>
Date: Fri, 9 Aug 2002 08:35:34 -0500 (EST)

Hi,

> Because it's essentially the only way that scales.
>
> If you do not transfer blocks of data as they become available, you have
> to:
> * Buffer (in memory or on disk) the entire object
> * Keep the client from timing out somehow.
>
> The buffering issue is very serious: imagine 100 clients, each requesting
> a different 50 MB file at the same time: you would need 5 GB of storage
> just to fulfil the requests, whereas when you send smaller blocks of
> data, you can decide whether to keep or throw away the data the client
> already has, allowing a 486 with 100 MB of RAM to serve the hypothetical
> example above - at (moderately) high speed.
>
> Also, *most* OS calls cannot send a 100 MB file in one call.

That's kinda what I figured, that it's generic for that reason alone. I
imagine a 50 MB file would cause a problem if it weren't throttled out a
little bit at a time, but smaller text/html replies shouldn't cause any
problems. Too bad the size of the reply can't really be detected up front.
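
If I follow, the relay is basically just a bounded-block copy loop, so the
per-client cost stays at the block size rather than the object size. A
minimal sketch of what I mean (plain blocking I/O, nothing like Squid's
actual event-driven store client; the block size is an arbitrary pick):

    /*
     * Sketch only: relay an object from a store fd to a client fd in
     * fixed-size blocks.  Memory per client stays at BLOCK_SIZE no
     * matter how large the object is.  Assumes blocking descriptors.
     */
    #include <unistd.h>
    #include <errno.h>

    #define BLOCK_SIZE 4096

    int relay_object(int store_fd, int client_fd)
    {
        char buf[BLOCK_SIZE];
        ssize_t n;

        while ((n = read(store_fd, buf, sizeof(buf))) > 0) {
            ssize_t off = 0;
            while (off < n) {
                ssize_t w = write(client_fd, buf + off, n - off);
                if (w < 0) {
                    if (errno == EINTR)
                        continue;
                    return -1;          /* client gone, give up */
                }
                off += w;
            }
        }
        return n < 0 ? -1 : 0;          /* 0 on clean end of object */
    }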

I'd like to be able to scan the entire HTML contents of a reply before it
is sent to the client, but not have to care about large media objects.
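
Something along these lines is the decision I'm picturing; purely a
hypothetical sketch (none of these names are real Squid code, and the type
and length would have to come off the parsed reply headers, where
Content-Length is often missing anyway):

    /*
     * Hypothetical sketch, not a Squid API: buffer and scan only what
     * claims to be HTML and is small (or of unknown length, up to some
     * runtime cap applied while buffering); stream everything else
     * block by block.
     */
    #include <strings.h>                /* strncasecmp() */

    #define SCAN_MAX_BYTES (256 * 1024) /* arbitrary cap on buffering */

    enum delivery { DELIVER_STREAM, DELIVER_BUFFER_AND_SCAN };

    enum delivery
    choose_delivery(const char *content_type, long content_length)
    {
        if (content_type && strncasecmp(content_type, "text/html", 9) == 0) {
            /* -1 means the length is unknown (no Content-Length). */
            if (content_length < 0 || content_length <= SCAN_MAX_BYTES)
                return DELIVER_BUFFER_AND_SCAN;
        }
        /* Large media and everything non-HTML just streams through. */
        return DELIVER_STREAM;
    }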

-- Joe