Re: [squid-users] Re: slow browsing in centos 6.3 with squid 3 !!

From: Amos Jeffries <squid3_at_treenet.co.nz>
Date: Mon, 25 Feb 2013 23:39:33 +1300

On 25/02/2013 9:18 p.m., Ahmad wrote:
> Amos Jeffries-2 wrote
>> On 25/02/2013 12:30 a.m., Ahmad wrote:
>>> hello,
>>> thanks Amos, I've modified the config file as you suggested.
>>> After removing the RAID 0, I've noticed better performance.
>>> =============================================================
>>> In general, browsing speed is lower than the speed in the absence of
>>> squid,
>>> but anyway it is acceptable and I wish to enhance it as much as I can!
>>> ======================================================
>>> As I mentioned in the beginning, I have excellent hardware with about
>>> 32 GB of RAM.
>>> But I have a major problem with squidGuard!
>>> After some time it begins to bypass!
>>> I searched for using dansguardian instead of squidGuard, but it seems
>>> that dansguardian is not compatible with TPROXY! ===> came as a shock to me!
>>> ==================================================
>>>
>>> I have pumped only 1000 users with about 150-180 Mbps only!
>>> Here is the log of squidGuard:
>>> ==============
>>> 2013-02-24 06:25:32 [17282] Warning: Possible bypass attempt. Found
>>> multiple
>>> slashes where only one is expected:
>>> http://surprises.tango.me/ts//assets/ayol_fairy_gingerbread_surprise_2-UI_VG_SELECTOR_PACK-android.zip
>> Ah, I see. SquidGuard is detecting what it reports as a "bypass attempt".
>>
>> This is NOT squidGuard being bypassed.
>>
>> There is a type of web server attack *called* a "bypass attack" which
>> was designed to use multiple slashes like // or ./ or ../ to trick
>> simple URL-matching security rules (like squidGuard appears to be using)
>> into ignoring parts of the URL. Any pattern-match regex applied to the
>> URL which looks for the "http://" by ignoring the "http:" portion and
>> identifying the "//" portion as the start will miss the real domain
>> name, any attacker-supplied login details, and maybe some of the path.
>>
>> However, "//" is not necessarily a wrong pattern. The author of the
>> website determines what the URL syntax is, so if the web server the URL
>> is supposed to be handled by can cope with it correctly, it is a valid
>> URL.
>>
>>> 2013-02-24 06:27:04 [17282] Warning: Possible bypass attempt. Found a
>>> trailing dot in the domain name:
>>> http://www.google.ps/xjs/_/js/s/sy15,gf,adnsp,wta,sy5,sy45,sy47,sy6,sy50,sy46,sy51,sy7,sy48,sy53,sy54,sy49,sy52,adct,ssi/rt=j/ver=OMt9IcC1O10.en_US./am=CA/d=0/sv=1/rs=AItRSTOekKHDXRJiLDzqcQkCe4C3pVWkbw
>> "Trailing dot" ??
>>
>> Oh I see. .http://.... C1O10.en_US./
>>
>> Whatever URL match squidGuard is testing there is *VERY* broken. Only
>> [a-zA-Z0-9\-\.\:] are permitted characters in domain names (or a raw IP,
>> which can also appear there). The squidGuard pattern is currently
>> allowing _ , / = and probably # and ? as well, I guess.
>> You need to fix that pattern *immediately*, regardless of whatever else
>> you do about squidGuard.
>>
>>> [root_at_squid ~]#
>>> ==============================
>>> here is a sample of cache.log file:
>>> {Accept: */*
>>> Content-Type: application/x-www-form-urlencoded
>>> 2013/02/24 06:24:18| WARNING: HTTP header contains NULL characters
>>> {Accept:
>>> */*
>>> Content-Type: application/x-www-form-urlencoded}
>>> NULL
>>> [... the same NULL-characters warning repeated several more times ...]
>>> {Accept: */*
>>> Content-Type: application/x-www-form-urlencoded
>>> 2013/02/24 06:24:41| clientProcessRequest: Invalid Request
>>> 2013/02/24 06:25:00| clientProcessRequest: Invalid Request
>>> 2013/02/24 06:25:04| clientProcessRequest: Invalid Request
>>> 2013/02/24 06:25:07| clientProcessRequest: Invalid Request
>>> 2013/02/24 06:25:09| helperHandleRead: unexpected reply on channel 0 from
>>> redirector #1 ''
>> The squidGuard helper is sending Squid more lines of response than Squid
>> sent lines of request.
>> It looks like something is causing an extra newline at the end of a
>> response.
>>
>> When that happens, the squidGuard helper is killed and a new one is
>> started. This process slows down your Squid with a small pause while the
>> new helper starts. If it happens often, that could be a large part of
>> your speed problem.
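(As an illustration of that one-reply-per-request contract: a hedged Python sketch of a minimal Squid rewrite helper, not squidGuard's actual code. The function name is my own invention.)

```python
def reply_for(request_line):
    # Squid sends the helper one request per line; the helper must send
    # back exactly one newline-terminated reply per request.  An empty
    # line means "leave the URL unchanged"; a real filter would return a
    # rewritten URL or redirect here instead.  A single stray extra
    # newline is enough to desynchronise the channel and produce the
    # "unexpected reply on channel 0" error quoted above.
    parts = request_line.split()
    url = parts[0] if parts else ""  # a real filter would inspect this
    return "\n"

# The real helper's main loop is then simply:
#   import sys
#   for line in sys.stdin:
#       sys.stdout.write(reply_for(line))
#       sys.stdout.flush()  # one line out per line in, never buffered
```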
>>
>>
>> Amos
>
> Hi Mr Amos,
> thanks very much for the explanation.
> Thanks Marcus,
>
> so,
>
> you mentioned that I have to fix the trailing-dot and "//" patterns in
> squidGuard! How can I fix them??

I'm not sure. It will be something in the squidguard rules or blocklists.
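To illustrate the kind of pattern problem involved (a hedged Python sketch, not squidGuard's actual matching code; the URL is modelled on the one in your log), a rule that keys on "//" instead of the full scheme loses the domain, while a stricter pattern does not:

```python
import re

url = "http://surprises.tango.me/ts//assets/pack.zip"

# Naive rule of the kind the "bypass attack" exploits: treat the text
# after the last "//" as the interesting part of the URL.  The doubled
# slash inside the path makes it miss the real domain entirely:
naive = url.split("//")[-1]        # -> "assets/pack.zip" (domain lost)

# Safer rule: anchor on the scheme and allow only characters that are
# legal in a domain name (letters, digits, "-", ".", plus ":" for a port):
m = re.match(r"^https?://([A-Za-z0-9.\-]+(?::[0-9]+)?)(/|$)", url)
host = m.group(1) if m else None   # -> "surprises.tango.me"
```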

> I want to say something!
>
> I've removed squidGuard 1.4 and installed the squidGuard 1.5 beta version.
>
> After that,
> no bypass happened :)
> I mean, it seems that it was a problem in squidGuard.
> I read that there are bypass bugs in squidGuard, and I found squidGuard 1.5
> is better.
> I pumped 2000 users into squid with 200 Mbps of bandwidth and no bypass
> occurred.
>
> this is one issue ,
>
> Now let's return to the issue of slow browsing.
> Again, the browsing is not very bad; it is acceptable anyway, but lower
> quality than in the absence of squid.
>
> I don't know if it is because of my hard disks!
> My disks are as below:
> hd1 ==> SSD, 180 GB, operating system
> hd2 ==> SATA, 560 GB, /cache1 storage
> hd3 ==> SATA, 560 GB, /cache2 storage
> hd4 ==> SATA, 560 GB, /cache3 storage
>
> Now I don't know: do I need more hard disks in addition to hd2, hd3, hd4?
> Or do I need to replace them with SSDs?
> Or do I need to use another filesystem to enhance the speed?
>
> You may advise me, Mr Amos, about the best choice :)

Sorry, I can't help there. I've only played around a bit with one or two
filesystems and RAID / non-RAID configurations of HDDs to see what worked
best for my particular needs (which are old cheap boxes thrown into
motels and shops as captive portal wireless gateways and distributed CDN
nodes - so nothing big on speed required). Most of what I know is just a
few details people have commented on hereabouts over the last few years.

I would suggest you locate the tools to measure I/O performance of your
drives before you rush off and change anything else about them. Then you
will have a way to identify if any change is working for better or worse
speeds.
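For example, a very rough probe can be scripted like this (a hedged sketch; the /cache path in the usage note comes from your layout, and dedicated tools such as iostat, fio or bonnie++ will give far better numbers):

```python
import os
import time

def write_probe(path, size_mb=64):
    # Crude sequential-write probe: time writing size_mb megabytes with an
    # fsync at the end, and return the apparent throughput in MB/s.  Only
    # useful for before/after comparisons, not as an absolute benchmark.
    buf = b"\0" * (1 << 20)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.unlink(path)  # clean up the probe file
    return size_mb / elapsed
```

Run it once per cache disk, e.g. `write_probe("/cache1/probe.tmp")`, and compare the numbers before and after any filesystem or hardware change.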

> ===========================
> Now, after all of the modifications I did,
> I mean after I used the squidGuard 1.5 beta, I will post my logs of
> squidGuard and cache.log.
> Note that I'm still using squid 3.1.0; I downloaded it with yum install!

Then this will be the next step of performance tuning: upgrading your
squid 3.1 to squid 3.3, which should perform much faster.
If yum is not presenting you with anything newer than 3.1, you will have
to dig around the Internet and locate an RPM, or try your hand at
building it from source.

> ======================================
>
> do i need to increase the redirector in squidguard ???

No.

As said, the "bypass" message coming out of squidGuard is NOT about a
bypass happening; it is reporting an attack type.

squidGuard may actually be bypassed (i.e. the squidGuard lookup skipped)
if the redirector_bypass setting in your Squid is set to ON.
Just turn redirector_bypass OFF to avoid that happening in future.
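For reference, the relevant squid.conf fragment is just this (directive
name depends on your Squid version; I believe newer 3.x releases spell it
url_rewrite_bypass):

```
# squid.conf: never skip the redirector lookup when helpers are busy
# or dying -- requests must not pass through unfiltered.
redirector_bypass off
```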

Yes, client traffic *is* slowed down by being processed by Squid, simply
because parsing HTTP takes longer than queueing TCP packets. About
10-50 ms is normal on MISS requests, I think, depending on what
processing Squid has to perform.

Amos
Received on Mon Feb 25 2013 - 10:39:45 MST

This archive was generated by hypermail 2.2.0 : Mon Feb 25 2013 - 12:00:05 MST