Re: Introduction / accelerator feature ideas

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Fri, 21 Feb 2003 01:21:49 +0100

On Thursday 20 February 2003 22.53, Flemming Frandsen wrote:
> Henrik Nordstrom wrote:
> > Hi, and welcome to Squid-dev.
>
> Thanks, I'll try not to do too much faq asking:) but why is
> reply-to set to the poster rather than the list:)

Reply-To is not set. This is intentional. Just remember to hit
"reply to all" when responding to messages on the mailing list and
everything is fine.

> One could also argue that allowing a user to have only one running
> request is a generally useful feature to stop one client from
> monopolizing the webservers time...

What this limit should look like depends heavily on the application,
but sure, a limit of one request at a time per user for a given URL
could be quite useful from a generic reverse proxy perspective, to
stop people who click like mad on the reload button from eating up
valuable resources, especially if the application cannot detect that
the client has aborted the connection (see half_closed_clients in
squid.conf).
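
As an aside, that directive can be set so that a client closing its
end of the connection is treated as an abort. A minimal squid.conf
sketch (the directive is standard; the value is only illustrative for
this scenario):

    # Treat a client that closes its half of the connection as having
    # aborted the request, so the abort is noticed instead of the
    # connection lingering half-open.
    half_closed_clients off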

> > For this to work it must be very carefully specified how
> > to identify that the request is exactly the same and should be
> > allowed to take over the pending result for a previously aborted
> > identical request.. and when to keep waiting for responses to
> > aborted requests in hope that the user simply retried the same
> > request..
>
> Hmm, well, in my case the response is never cacheable and the
> locking would be by the user's session id (provided by a cookie) so
> there isn't much ambiguity.

Let's narrow this down with an example..

Assume two different POST requests by the same user to the same URL,
for example when the user realises he filled in something wrong in
the form, but only after pushing the submit button..

What should happen in such a case?
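
To make the ambiguity concrete, here is a hypothetical sketch (plain
C, not Squid code; all names are made up) of what a request-identity
key would have to cover before a retried request could be allowed to
take over a pending reply. A session cookie alone is not enough: the
two POSTs above share user and URL but differ in the body.

    /* Hypothetical sketch, not Squid code: the fields that would all
     * have to match before a retried request could take over a
     * pending reply. */
    #include <string.h>

    struct request_key {
        char method[16];               /* e.g. "POST" */
        char uri[1024];                /* the request URI */
        char session_id[64];           /* from the session cookie */
        unsigned char body_digest[16]; /* e.g. MD5 of the request body */
    };

    /* Two requests count as "the same" only if every field matches. */
    static int same_request(const struct request_key *a,
                            const struct request_key *b)
    {
        return strcmp(a->method, b->method) == 0
            && strcmp(a->uri, b->uri) == 0
            && strcmp(a->session_id, b->session_id) == 0
            && memcmp(a->body_digest, b->body_digest,
                      sizeof a->body_digest) == 0;
    }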

> Hmm, I wasn't talking about having one Apache per client, or even
> tying each client to one server for all eternity, but rather
> preferring a server that had been used recently by the same client
> over a cold one.

The question is whether a suitable balance can be found where there
are enough idle connections that a "warm" connection is likely to be
available when this user returns, or whether all of "his" connections
will by then already be busy serving other users.. If your
application is heavy and takes a long time to respond relative to the
amount of data per response, you will quite likely need to
artificially increase the number of squid->server connections to
improve the likelihood that there is a connection to reuse for this
user.. Somewhere there is a balance between keeping excess
connections and the overhead of sending users to server instances
that do not have the needed application data for this specific user
cached..

> Yes, it's related, actually it's the only way I can see for A to
> happen without the users madly reloading, but I wouldn't put it
> past them:)

All of the following end up as the same pattern in HTTP:

* Repeated reload before the response is seen
* Submit, stop, Submit
* Submit, Submit

(submit == submitting a form or following a link, or other action
causing a new request to be sent)

In all cases the first request is aborted by closing the connection;
a new connection is opened and the "new" request is sent.

Similar things happen when the user uses the back button while a page
is being loaded, or simply first clicks on one link on a page and
then clicks on another link before the new resulting page is
received.

Regards
Henrik