Re: [squid-users] Will Delayed Pools help with squid fetching content?

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Tue, 27 Jul 2010 05:09:24 +0000

On Mon, 26 Jul 2010 15:27:22 -0430, Jose Ildefonso Camargo Tolosa
<ildefonso.camargo_at_gmail.com> wrote:
> Hi!
>
> On Fri, Jul 23, 2010 at 8:31 PM, Amos Jeffries <squid3_at_treenet.co.nz>
> wrote:
>> Etienne Philip Pretorius wrote:
>>>
>>> Hello List,
>>>
>>> I am running Squid Cache: Version 3.1.3 and I wanted to cache Windows
>>> updates, so I applied the suggested settings from
>>> http://wiki.squid-cache.org/SquidFaq/WindowsUpdate but now I am
>>> experiencing another problem.
>>>
>>> It seems that while I am now able to cache partially downloaded files
>>> with squid, I am flat-lining my breakout onto the Internet. I just
>>> wanted to check here before attempting to implement delay pools. As I
>>> see it, it is squid fetching the file at the maximum speed possible.
>>>
>>> So my question is: if I implement delay pools for the client
>>> connections, will squid also fetch the files at those reduced rates?
>>
>> Not directly. Squid will still fetch the files it has to at full speed.
>> However, indirectly the clients will be delayed in their downloads and
>> so will spread their followup requests out over a longer time span than
>> without delays.
>
> I remember an old thread about a similar situation: it was a person
> who was trying to use squid at an ISP, where subscriber connections
> are a lot slower than the ISP's connection to the Internet. So when a
> client started a download of a 600MB file, squid would fetch the whole
> file using a lot of bandwidth while the client was not even at 10% of
> the download. If the client then decided to cancel the download at,
> say, 25%, there would be a lot of wasted bandwidth.

There were quite a few discussions about that problem. Windows updates,
and Vista service packs in particular, bring this up; large media files
and ISO images are not transferred over HTTP nearly as much.

>
> Can that situation be corrected with delay pools? Or, what do you need

Which scenario are you asking about here? The download-aborting problem is
worked around by some voodoo with quick_abort and range_offset_limit.
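
For the Windows update case, the FAQ page linked above suggests settings
along these lines (quoted from memory, so double-check against the wiki):

  # finish every transfer even after the client aborts it, so the
  # object completes and becomes a HIT for the next requester
  quick_abort_min -1 KB

  # fetch the whole object even when the client asked only for a range
  range_offset_limit -1

If wasted upstream bandwidth on cancelled downloads is the concern
instead, quick_abort_min/quick_abort_max can be set low so squid drops
the server connection soon after the client goes away.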

> to correct that? The desired behavior is that squid actually follows
> the download at the speed of the fastest client, instead of at the
> speed of its own connection to the Internet.

When the fastest client is a 56K modem or similarly slow, that behaviour
makes downloading a movie ISO a huge ongoing waste of FD and RAM resources
(in-transit objects are stored in memory as well as cache-backed). The
current behaviour is designed to grab fast and release the server-facing
resources for re-use while the clients are spoon-fed. It's great when
accompanied by collapsed forwarding (which unfortunately 3.x does not yet
have).
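
For reference, the 2.6/2.7 series has it as a single directive, if
running a 2.x release is an option:

  # merge concurrent cache misses for one URL into a single server fetch
  collapsed_forwarding on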

I think the more desirable behaviour is a server-side bandwidth limit.
IIRC one was added to 2.HEAD for testing.
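
For comparison, the client-side delay pools discussed above take only a
few lines. A minimal sketch with one aggregate (class 1) pool; the
64 KB/s figure is purely illustrative, so tune it to your link:

  delay_pools 1
  # class 1 = a single aggregate bucket shared by all matching traffic
  delay_class 1 1
  # restore rate / bucket size, in bytes per second and bytes
  delay_parameters 1 64000/64000
  # apply the pool to every client
  delay_access 1 allow all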

Amos
Received on Tue Jul 27 2010 - 05:09:28 MDT
