Re: GET byte-range abuse

From: Philippe Strauss <philippe.strauss@dont-contact.us>
Date: Tue, 27 Jul 1999 14:00:46 +0200

On Mon, Jul 26, 1999 at 10:34:56PM +0200, Henrik Nordstrom wrote:
> Philippe Strauss wrote:
>
> > A way to stop such abuse would be to track each request using
> > Range: and put all the Range requests referring to the same URL into a
> > delay pool, or a bandwidth shaper of some kind.
>
> Range requests are only one way to do this abuse. Another, more obvious
> one is to run several independent downloads in parallel. There exist
> several download tools which allow you to queue a number of downloads
> while browsing, and then download all the files in a big batch with a
> configurable amount of parallelism. I imagine that one of them now also
> features parallelism for one object using Range requests..
>
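(Just to make the abuse concrete: this is roughly how such a download
accelerator carves one object into parallel byte-range requests. A purely
illustrative sketch, not taken from any actual tool.)

```python
def range_headers(content_length, parts):
    """Return the Range header values a parallel downloader would send
    to fetch one object of content_length bytes in `parts` pieces."""
    chunk = content_length // parts
    headers = []
    for i in range(parts):
        start = i * chunk
        # the last part absorbs any remainder
        end = content_length - 1 if i == parts - 1 else start + chunk - 1
        headers.append("bytes=%d-%d" % (start, end))
    return headers

if __name__ == "__main__":
    # four parallel GETs for a 1000-byte object
    for h in range_headers(1000, 4):
        print(h)
```

To the cache each of these looks like an independent, cheap request, which
is why matching on Range: alone is easy to evade.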
> > Is it a sensible idea?
>
> Perhaps in some situations, but I am afraid it will only provoke the use
> of another request pattern your filter does not match.
>
> A better fix is to change delay pools from delaying individual IPs to
> delaying individuals, but doing such a thing most likely requires the
> use of proxy authentication to identify the individual user.

Yes, but proxy authentication and transparent proxying are not good
friends. Still, it would certainly be the better solution: an API through
which an external daemon, looking up for example an SQL database, could
grant a maximum bandwidth to a particular user or group of users.
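Something like the following, say. The line protocol (one username in, one
bandwidth figure out) and the limits table are pure assumptions on my part;
a real helper would do the SQL query where the dictionary lookup is.

```python
import sys

# Stand-in for the SQL database lookup; keys and rates are invented.
LIMITS = {"philippe": 16384, "staff": 65536}
DEFAULT_LIMIT = 8192  # bytes/sec for users with no explicit entry

def lookup(user):
    """Return the allowed bandwidth (bytes/sec) for a user."""
    return LIMITS.get(user, DEFAULT_LIMIT)

def main():
    # Squid would write one authenticated username per line to our stdin
    # and block on the one-line answer, so flush after every reply.
    for line in sys.stdin:
        user = line.strip()
        sys.stdout.write("%d\n" % lookup(user))
        sys.stdout.flush()

if __name__ == "__main__":
    main()
```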

In our network, an IP address can represent a single user, but also a
whole network behind a firewall, and addresses may be dynamic or static.

A kind of plugin architecture for managing Squid's QoS features could be
a more general solution to the specific problem I was talking about.
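In essence such a plugin would keep one shaper per authenticated user
rather than per IP. A rough sketch of the per-user shaping, as a classic
token bucket (names and rates are illustrative, not any existing Squid
interface):

```python
import time

class TokenBucket:
    """Allow up to `rate` bytes/sec with bursts up to `burst` bytes."""

    def __init__(self, rate, burst):
        self.rate = float(rate)    # tokens (bytes) refilled per second
        self.burst = float(burst)  # bucket capacity
        self.tokens = float(burst)
        self.last = time.monotonic()

    def consume(self, nbytes):
        """True if nbytes may be sent now, False if the send must wait."""
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# One bucket per user, created on first request -- this is what makes the
# limit follow the individual instead of the (possibly shared) IP address.
buckets = {}

def allow(user, nbytes, rate=16384, burst=32768):
    bucket = buckets.setdefault(user, TokenBucket(rate, burst))
    return bucket.consume(nbytes)
```

Whether the user is behind a firewall or on a dynamic address no longer
matters, since the bucket is keyed on the authenticated name.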

Kind regards.

> --
> Henrik Nordstrom
> Spare time Squid hacker

-- 
Philippe Strauss, ingenieur reseau/systemes, Urbanet SA
philippe.strauss@urbanet.ch
tel +41 21 623 30 20
--
Received on Tue Jul 27 1999 - 05:56:27 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:47:34 MST