Hello,
I have a problem that crops up with a single site (alas, over https). 
It's a subscription service for requesting financial information on 
companies. It works as follows: you add a series of filters to trim the 
results down to a manageable number (50, 100, 400, whatever). This poses 
no problems. Once you're satisfied with the results, you can then either 
view them in full, or ask for an export (e.g. to a "popular" 
spreadsheet).
 From time to time, requesting either of these things results in a 
lengthy delay, followed by a "Connection reset by peer" message and a 
502 error logged by squid. But it usually works. I learnt from the tech 
support of the site in question that such requests are proxied from a 
back-end server.
After such a failure, refreshing the client straight away brings up the 
page correctly. I upgraded to STABLE14 yesterday, and I haven't been 
able to provoke the error since. But as it's intermittent, it's really 
hard to say.
Anyway, I don't think the problem is with Squid; I think it lies with 
the site. At most it's some weird interaction between Squid and this 
site. But I need more evidence to argue my case.
My question is this: what magic debug string should I use to get some 
useful information into cache.log? I've tried picking sections and 
levels by grepping the source, but I get either reams of useless 
information or nothing.
My hypothesis is that their back-end server is getting overloaded and 
is sending some sort of private keep-alive message to the front-end, 
which is leaking out to my proxy, which can't make sense of it. 
Basically, I want to log more debug information about strange 
responses. Any tips would be gratefully received.
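In the meantime I'm planning to capture the proxy-to-origin leg with 
tcpdump, so that if it happens again I have the raw bytes to inspect 
afterwards (the interface and host name below are placeholders, 
obviously):

```
# capture full packets between the proxy and their server, for later
# inspection in ethereal/wireshark; host name is a placeholder
tcpdump -i eth0 -s 0 -w suspect.pcap host www.example.com and port 443
```

Even if the payload is encrypted, the TCP-level behaviour (who sent 
the RST, and when) should help settle where the reset is coming from.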
Thanks,
David
-- "It's overkill of course, but you can never have too much overkill."
Received on Wed May 24 2006 - 02:03:35 MDT