Re: [squid-users] Will Delayed Pools help with squid fetching content?

From: Jose Ildefonso Camargo Tolosa <ildefonso.camargo_at_gmail.com>
Date: Wed, 28 Jul 2010 15:13:13 -0430

Greetings!

On Tue, Jul 27, 2010 at 12:39 AM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> On Mon, 26 Jul 2010 15:27:22 -0430, Jose Ildefonso Camargo Tolosa
> <ildefonso.camargo_at_gmail.com> wrote:
>> Hi!
>>
>> On Fri, Jul 23, 2010 at 8:31 PM, Amos Jeffries <squid3_at_treenet.co.nz>
>> wrote:
>>> Etienne Philip Pretorius wrote:
>>>>
>>>> Hello List,
>>>>
>>>> I am running Squid Cache: Version 3.1.3, and I wanted to cache Windows
>>>> updates, so I applied the suggested settings from
>>>> http://wiki.squid-cache.org/SquidFaq/WindowsUpdate but now I am
>>>> experiencing another problem.
>>>>
>>>> It seems that while I am able to cache any partially downloaded files
>>>> with squid now, I am flat-lining my break out onto the Internet. I just
>>>> wanted to check here before attempting to implement delay pools. As I
>>>> see it, it is squid fetching the file at the maximum speed possible.
>>>>
>>>> So my question is, if I implement delayed pools for the client
>>>> connections
>>>> - will squid also fetch the files at those reduced rates?
>>>
>>> Not directly. Squid will still fetch the files it has to at full speed.
>>> However, indirectly the clients will be delayed in their downloads so
>>> will
>>> spread their followup requests out over a longer time span than without
>>> delays.
>>
>> I remember an old thread about a similar situation: it was a person
>> who was trying to use squid for an ISP, but the subscriber connections
>> were a lot slower than the ISP's connection to the Internet, and so
>> when a client started a download of a 600MB file, squid would fetch
>> the whole file using a lot of bandwidth while the client would not
>> even be at 10% of the download. So, if the client decided to cancel
>> the download at, say, 25%, there would be a lot of wasted bandwidth.
>
> There were quite a few discussions about that problem. Windows updates and
> Vista service packs in particular bring this up; large media and ISOs are
> not transferred over HTTP quite so much.
>
>>
>> Can that situation be corrected with delay pools? Or, what do you need
>
> Which scenario are you asking about here? The download-aborting problem is
> worked around by some voodoo with quick_abort and range_offset_limit.

Yes, I think I remember that from the old thread. The thing is: say
you have a 100Mbps Internet connection (which may be even larger at an
ISP) and give 384kbps to your clients (which is not too little), and
someone starts downloading a 150MB file. That file would be downloaded
on the server side in around 12~20 seconds, but will take around 3200
seconds on the client side. If the client gets disconnected (or just
cancels the download) after just 1 minute, the proxy would already have
downloaded the whole 150MB, so even if quick_abort and
range_offset_limit are correctly set, there will be wasted bandwidth.
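
(Just to be sure we mean the same "voodoo": I assume it is something
along these lines. Only a sketch, the numbers would need tuning, and
note that it pulls in the opposite direction of the WindowsUpdate wiki
settings, which force downloads to complete:)

  # abort the server-side fetch as soon as the last client goes away,
  # instead of finishing the object "just in case"
  quick_abort_min 0 KB
  quick_abort_max 0 KB
  quick_abort_pct 95
  # do not turn client Range requests into full-object fetches
  range_offset_limit 0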

The proxy, even if it downloads whole files ahead of the clients, is
only a good thing if the clients *always* finish their downloads
(which, in at least 10% of the cases, is not true) and if the ISP has a
correctly set up QoS system, i.e., one where you actually guarantee the
bandwidth for other services (colocated servers, guaranteed-bandwidth
services such as enterprise links). If you don't, the proxy will just
make things worse, by wasting bandwidth for both you as the ISP and the
server the download is being fetched from.
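
(For completeness, the per-client limiting the original poster asked
about would look roughly like this, if I remember the delay pools
syntax right; a class 2 pool, with 384kbps being about 48000 bytes/s
per client:)

  # limits what squid sends to clients; squid's own fetch from the
  # origin server is NOT limited by this
  delay_pools 1
  delay_class 1 2
  # unlimited aggregate, ~48000 bytes/s (384kbps) per client IP
  delay_parameters 1 -1/-1 48000/48000
  delay_access 1 allow all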

I think the proxy was just designed for a relatively slow Internet
link and a fast local network; it was not meant to be used by ISPs.

>
>> to correct that? The desired behavior is that squid actually follows
>> the download at the speed of the fastest client, instead of at the
>> speed of its own connection to the Internet.
>
> When the fastest client is a 56K modem or similarly slow, that behaviour
> makes downloading a movie ISO a huge ongoing waste of FD and RAM resources
> (in-transit objects are stored both in memory and cache-backed).
> The current behaviour is designed to grab fast and release server-facing
> resources for re-use while the clients are spoon-fed. It's great when
> accompanied by collapsed forwarding (which, unfortunately, 3.x does not
> yet have).

Or maybe the fastest client is at 1Mbps, or 10Mbps; it shouldn't
matter: the proxy shouldn't download more than, say, 2MB ahead of what
it has already sent to the client (there could be an issue with keeping
the connection to the server alive, so I believe there should be a
minimum number of packets exchanged with the server even if the client
falls more than 2MB behind).
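
(Now that I write this: I think there is a read_ahead_gap directive
that sounds like exactly that, although I am not sure how it behaves
for objects that are being cached, so take this only as a guess:)

  # buffer at most ~2MB from the server ahead of what has been
  # delivered to the client (the default is 16 KB, if I recall right)
  read_ahead_gap 2 MB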

O.K., so if I download a 4GB file, will the proxy keep the whole
4GB *in RAM*? That doesn't make any sense, because it would make
downloading large files impossible on servers with little memory. I
know the proxy will keep file descriptors and sockets busy while the
download is ongoing (and other structures internal to the proxy), but
keeping the whole object in memory...

I like collapsed forwarding; it is cool (that's why I still run 2.x).
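
(In 2.6/2.7 it is a single line, as far as I remember:)

  # merge concurrent requests for the same URL into one server fetch
  collapsed_forwarding on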

>
> I think the more desired behaviour is a server bandwidth limit. IIRC this
> was added to 2.HEAD for testing.

Server bandwidth limit... I will take a look at it.

Ildefonso
Received on Wed Jul 28 2010 - 19:43:30 MDT
