Re: Multiple request blocking

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Sat, 30 Jun 2001 01:05:58 +0200

Seems to be a problem with refreshes when the object is already cached
but expired.

Not sure what to do about it, or if it is of general interest to Squid
in normal use outside this specific benchmark.

Fellow Squid developers: What is your opinion on how/if/when we should
chain concurrent requests for the same object before even the status is
known?

--
Henrik Nordstrom
Squid Hacker
Eric Barsness wrote:
> 
> Thanks for the reply.  I should have been more clear in my description.  We
> are trying to do option A, as you describe below.
> 
> What the TPC-W benchmark specifies is that certain dynamic web pages can be
> cached for up to 30 seconds each.  There are approximately 48 of these
> pages and they each get requested around 12 times a second in Unisys'
> current TPC-W results (and that rate will probably increase greatly in the
> near future).  These dynamic pages may take up to a second or so to be
> generated, so we could see many requests for the same page in the very
> short period of time it takes to refresh the page in the cache.  In the
> case of the benchmark, we know that the page will always be cacheable,
> and therefore a single request to the origin would be enough.
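> 
> Put numerically: at 12 requests per second for a page that takes about
> a second to regenerate, on the order of a dozen requests can arrive
> while a single refresh is still in flight, and several of them end up
> going to the origin on their own, as the logs below show.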
> 
> After setting neighbors_do_private_keys to 0 and recompiling, we still see
> multiple requests getting through on a cache miss.  It also seems strange
> to me that we see a SWAPOUT before we see a RELEASE in the store log in
> some cases.  Here is an example:
> 
> Store Log
> 
> 993063160.404 RELEASE 00000503 200 993063129 993063129 993063159 text/html 7838/7838 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063160.404 SWAPOUT 00000526 200 993063159 993063160 993063189 text/html 7819/7819 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> ...
> 993063189.728 SWAPOUT 00000523 200 993063189 993063189 993063219 text/html 7884/7884 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063189.868 RELEASE 00000523 200 993063189 993063189 993063219 text/html 7884/7884 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063189.868 SWAPOUT 00000523 200 993063189 993063189 993063219 text/html 7784/7784 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063189.914 RELEASE 00000523 200 993063189 993063189 993063219 text/html 7784/7784 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063189.914 SWAPOUT 00000523 200 993063189 993063189 993063219 text/html 7854/7854 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063190.155 RELEASE 00000523 200 993063189 993063189 993063219 text/html 7854/7854 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063190.155 RELEASE 00000526 200 993063159 993063160 993063189 text/html 7819/7819 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> 993063190.155 SWAPOUT 00000537 200 993063189 993063190 993063219 text/html 7884/7884 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME=
> ...
> 
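> (For reference, the store.log fields above are: timestamp, action,
> swap file number, HTTP status, then the reply's Date, Last-Modified,
> and Expires timestamps, followed by content type, expected/actual
> size, request method, and URL. Note that Expires is consistently 30
> seconds after Date, matching the intended 30-second lifetime of these
> pages.)
> 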
> Access Log
> 
> 993063160.448    749 8.5.128.26 TCP_REFRESH_MISS/200 8079 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - DIRECT/127.0.0.1 text/html
> 993063160.881      4 8.5.128.28 TCP_MEM_HIT/200 8086 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - NONE/- text/html
> ...
> 993063189.757    724 8.5.128.27 TCP_REFRESH_MISS/200 8144 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - DIRECT/127.0.0.1 text/html
> 993063189.904    702 8.5.128.26 TCP_REFRESH_MISS/200 8044 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - DIRECT/127.0.0.1 text/html
> 993063189.960    702 8.5.128.29 TCP_REFRESH_MISS/200 8114 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - DIRECT/127.0.0.1 text/html
> 993063189.966      6 8.5.128.28 TCP_MEM_HIT/200 8113 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - NONE/- text/html
> 993063190.059      4 8.5.128.30 TCP_MEM_HIT/200 8121 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - NONE/- text/html
> 993063190.149      2 8.5.128.28 TCP_MEM_HIT/200 8121 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - NONE/- text/html
> 993063190.197    660 8.5.128.21 TCP_REFRESH_MISS/200 8144 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - DIRECT/127.0.0.1 text/html
> 993063190.604      4 8.5.128.29 TCP_MEM_HIT/200 8151 GET http://127.0.0.1:7784/w.pgm?3&SUBJECT_STRING=REFERENCE&IFRAME= - NONE/- text/html
> ...
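> 
> (In the native access.log format the fields are: timestamp, elapsed
> milliseconds, client address, result code/HTTP status, bytes sent,
> method, URL, ident, hierarchy/peer, and content type. The four
> TCP_REFRESH_MISS lines clustered between 993063189.757 and
> 993063190.197, each spending roughly 700 ms going DIRECT to the
> origin, are the duplicate fetches described above.)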
> 
> Eric Barsness, IBM eServer iSeries Systems Performance
> ericbar@us.ibm.com
> ----- Forwarded by Eric Barsness/Rochester/IBM on 06/26/2001 03:03 PM -----
> 
> From: Henrik Nordstrom <hno@hem.passagen.se>
> Sent by: hno@hem.passagen.se
> To: Eric Barsness/Rochester/IBM@IBMUS
> Subject: Re: Multiple request blocking
> Date: 06/25/2001 11:34 AM
> 
> Not sure exactly what you want to do.
> 
> A) Do you want to force Squid to join parallel requests for the same
> page into one, even if it is not yet known that the object is cacheable?
> 
> or
> 
> B) Do you want to stop Squid from merging concurrent requests for the
> same cacheable page?
> 
> An object is determined to be cacheable by looking at the reply
> headers. As soon as it has been determined that an object is cacheable,
> any future requests received for this same object will join the first
> request as a cache hit.
> 
> By using the hack mentioned below, Squid by default assumes objects are
> cacheable, and thus should allow requests to join even before the reply
> headers have been received, thereby accomplishing 'A'. At least in
> theory... BUT this opens a small window where uncacheable objects may
> be given out as cache hits to the next requestor.
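> 
> To make the tradeoff concrete, the joining rule is roughly the
> following (a simplified sketch in plain C; the names and types are
> invented for illustration and this is not Squid's actual code):
> 
> #include <stdbool.h>
> 
> typedef enum { KEY_PRIVATE, KEY_PUBLIC } key_scope_t;
> 
> typedef struct {
>     key_scope_t scope;  /* private until cacheability is known */
>     bool fetching;      /* reply still being read from the origin */
> } store_entry_t;
> 
> /* May a newly arrived request attach to this in-flight entry? */
> static bool
> can_join(const store_entry_t *e, bool assume_cacheable)
> {
>     if (!e->fetching)
>         return true;    /* object already complete: an ordinary hit */
>     if (assume_cacheable)
>         return true;    /* the hack: join before the reply headers
>                          * arrive, risking that an uncacheable reply
>                          * is handed out as a hit */
>     return e->scope == KEY_PUBLIC;  /* default: join only once the
>                                      * reply headers have proved the
>                                      * object cacheable */
> }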
> 
> I don't think there is any way to accomplish 'B' short of making the
> object NOT cacheable.
> 
> --
> Henrik Nordstrom
> Squid Hacker
> 
> Eric Barsness wrote:
> >
> > Mr. Nordstrom,
> > I'm curious as to whether an option, as you mentioned below, has been
> > added to squid.conf to allow this multiple request blocking feature?
> > tried your suggestion below but were unable to get it to work.  We are
> > using a single Squid instance without any peer or parent caches, and are
> > using Squid as an accelerator only.
> >
> > The reason that Mr. Chan from Unisys, and we at IBM, would like this
> > feature is so that we can continue to use Squid in our TPC-W benchmark
> > configurations.  Without the ability to block multiple requests for the
> > same page, we must use a different product.  This is due to the
> > requirements placed on us by the benchmark specification.  I can
> > elaborate if you'd like.
> >
> > I thank you for any information you can provide...
> >
> > <<Mailing list thread begins here>>
> >
> > RE: [SQU] Multiple request blocking
> >
> > From: Chan, Alan S (Alan.Chan2@unisys.com)
> > Date: Wed Oct 04 2000 - 16:21:01 MDT
> >
> > This works great!
> > However, I ran into another problem. I need the page to live for 30
> > seconds. I added a Last-Modified header with the current time and a
> > Cache-Control value of max-age=30. In squid.conf I have
> > "refresh_pattern . 0 50% 1". The pages are refreshing every 113
> > seconds. Can you shed some light on it?
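> >
> > (For reference, my understanding of the syntax is "refresh_pattern
> > <regex> <min> <percent> <max>", with min and max given in minutes,
> > so the line above should only cap heuristic freshness at one minute:
> >
> >     # regex  min  percent  max    -- min and max are in minutes
> >     refresh_pattern . 0 50% 1
> >
> > and an explicit max-age from the origin should take precedence over
> > it anyway, which is why the 113 seconds puzzles me.)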
> >
> > Thank you very much!
> >
> > Alan
> >
> > > -----Original Message-----
> > > From: Henrik Nordstrom [mailto:hno@hem.passagen.se]
> > > Sent: Wednesday, October 04, 2000 12:13 AM
> > > To: Chan, Alan S
> > > Cc: squid-users@ircache.net
> > > Subject: Re: [SQU] Multiple request blocking
> > >
> > >
> > > Chan, Alan S wrote:
> > > >
> > > > With squid is there a way to block multiple requests from being
> > > > forwarded to the internet, in case a similar request had been
> > > > received earlier, been forwarded to the internet, and the response
> > > > is being waited upon?
> > > >
> > > > This feature would really help for heavily hit web pages that have
> > > > a long retrieval time, and would save multiple requests from
> > > > unnecessarily being forwarded to the web-site.
> > >
> > >
> > > Maybe this will work:
> > >
> > > edit global.h and change the one after
> > > neighbors_do_private_keys to a 0,
> > > then "make install".
> > >
> > > If it does then an option for this should perhaps be added to
> > > squid.conf..
> > >
> > >
> > > Please note that it will also give problems with stalled requests,
> > > as browsers do not allow the user to force a reload until they have
> > > at least received the headers of the page.
> > >
> > > --
> > > Henrik Nordstrom
> > > Squid hacker
> > >
> >
> > <<Mailing list thread ends here>>
> >
> > Eric Barsness, IBM eServer iSeries Systems Performance
> > ericbar@us.ibm.com
Received on Fri Jun 29 2001 - 17:08:57 MDT
