Re: [squid-users] Performance tunning !

From: Joe Cooper <joe@dont-contact.us>
Date: Thu, 27 Jun 2002 02:52:10 -0500

Henrik Nordstrom wrote:
> Joe Cooper wrote:
>
>
>>In the two cases I've seen it, setting a max MTU of 1476(?...1500 minus
>>whatever the GRE overhead is, which I think is 24 bytes) on the real
>>interface (i.e. not the gre interface) of the Squid machine makes the
>>problem disappear. Disabling MTU discovery did not have an impact.
>
>
> Odd... are you using GRE for the return traffic as well? I am not really
> sure how things pulls together on the host side when using the gre
> module..

No. Return traffic goes through eth0, possibly back through the router,
but possibly directly to the client (I think the problem only happens
when going back through the router to reach the clients, but I could be
wrong, as I rarely have router access at my client sites, and even when I
do I know so little about IOS that I can't make much of it). gre0
receives the encapsulated packets from the router and decapsulates them;
they are then picked up by the iptables rule (with an interface of
gre0 specified):

iptables -t nat -I PREROUTING -i gre0 -d 0/0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128

Or something like that, roughly.
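
For reference, here's a rough sketch of the whole Linux-side receive path
I typically end up with (interface names, the rp_filter tweak, and the
exact paths are from memory, so treat it as approximate):

# Load the GRE module; it provides the catch-all gre0 device that
# receives the router's encapsulated WCCP traffic.
modprobe ip_gre
ifconfig gre0 up

# Reverse-path filtering would otherwise drop the decapsulated packets,
# since they arrive on gre0 from addresses we have no route to there.
echo 0 > /proc/sys/net/ipv4/conf/gre0/rp_filter

# Redirect the decapsulated port 80 traffic into Squid.
iptables -t nat -I PREROUTING -i gre0 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128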

The gre0 interface always has an MTU of 1476, whether specified or not,
when running over ethernet (as I mentioned, it is the interface MTU
minus 24 bytes for GRE encapsulation overhead). Anyway, the
documentation at Cisco regarding this issue indicated the solution I've
used... or at least that's how I interpreted it. It worked, so I let
sleeping dogs lie. You and I have briefly discussed the issue here in the
past, with the link to the Cisco docs referenced, and you had some very
lucid thoughts on it; maybe what you said then will clarify it for us
now. ;-)
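
For the record, the workaround itself amounts to nothing more than this
on the Squid box (eth0 is whatever the real interface is at a given
site, and 1476 is just 1500 minus the 24 bytes of GRE overhead):

# Clamp the physical interface MTU to match the GRE tunnel's effective
# MTU; this also lowers the MSS the cache advertises, so clients never
# send us segments too big to survive encapsulation on the way in.
ifconfig eth0 mtu 1476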

>>It is likely that doing the same on the Cisco router side would also
>>have a similar effect. I've never tried it, so I can't say for sure.
>
>
> Probably not. To work around MTU problems by lowering the MTU you need
> to address it at the side seeing the problem, not the side causing the
> problem. To work around it at the side causing the problems you need to
> increase the MTU.

Which is impossible in this case (the ethernet max is 1500, unless you go
to 'fat' packets, i.e. jumbo frames, which aren't supported by most
10/100 ethernet cards).

My impression of the problem was that if the MTU was forced below the
cap imposed by the GRE encapsulation (either by the cache or the
router), the problem would go away. But you raise a valid point, and I
don't know what the effect would be of making the change on the router
side for communication between the cache and the router.
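
Another approach I haven't tried at these sites, but which should
sidestep the same class of problem without touching any interface MTUs,
is clamping the TCP MSS the cache advertises (1436 here assumes 1476
minus 40 bytes of IP and TCP headers):

# Untested in this setup: rewrite the MSS on SYN packets the cache sends
# out, so clients never offer us segments too big to fit inside the GRE
# encapsulation on the router->cache leg.
iptables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --set-mss 1436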

I should point out that this is a pretty rare problem, in my experience.
Out of about 15 WCCP deployments I've been directly involved in (and
some of our clients did it without my help, but I would have heard about
it if problems had surfaced), the problem has only shown itself twice.
I don't know the exact cause of it, but I suspect it has some relation
to return traffic going back through the router to reach the client. At
some sites where I /do/ know the router->cache and cache->client
topology, they are not running return traffic back through the router,
and have not experienced this problem. But I can't say for sure that
the sites that did hit the problem were going back through the router
for return traffic. So I have only one side of this equation proven...

-- 
Joe Cooper <joe@swelltech.com>
Web caching appliances and support.
http://www.swelltech.com
Received on Thu Jun 27 2002 - 01:53:29 MDT
