Re: [SQU] Could squid cause download flood?

From: Edinilson J. Santos <edinilson@dont-contact.us>
Date: Mon, 23 Oct 2000 09:29:30 -0300

I'm having a similar problem, but with Squid 2.3.STABLE4.
Look at my MRTG graphs: http://www.atinet.com.br/mrtg/public

When the link reaches 100%, only a few clients are accessing it, yet Squid
is consuming all the bandwidth.

thanks

Edinilson

------------------------------------------
ATINET-Afiliado UOL de Atibaia
Rua Francisco R. Santos, 54 sala 3
ATIBAIA/SP Cep: 12940-000
Tel: (0xx11) 4412-0876
http://www.atinet.com.br

----- Original Message -----
From: "Li Ni" <liny@nets.com.cn>
To: <squid-users@ircache.net>
Sent: Monday, October 23, 2000 3:50 AM
Subject: [SQU] Could squid cause download flood?

Hello everyone.
I'm sorry for my poor English, but I've run into a serious problem and need
some advice.

I am a newbie, but I have become a network administrator.
I am using Squid as the WWW proxy for my company.
Squid ran very well on version 2.2.STABLE1, which is shipped with
Red Hat 6.0. In August, I upgraded Squid from 2.2.STABLE1 to 2.3.STABLE3,
which supports the 'myport' option.

On August 24, I noticed this kind of entry in cache.log:
"WARNING: Disk space over limit: 409611 KB > 409600 KB".
So I downloaded squid-2.3.stable3-storeExpiredReferenceAge.patch and patched
the source. After compiling, I started Squid again, and it seemed to be all
right until 24 hours had passed.
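For anyone who sees the same warning: it fires when the total cache usage
exceeds the configured cache_dir size, so the 409600 KB limit above
corresponds to a 400 MB cache_dir. A squid.conf sketch (the directory path
and watermark values are examples, not Li Ni's actual configuration):

```
# 400 MB cache -> the 409600 KB limit in the warning above
cache_dir ufs /var/spool/squid 400 16 256

# Start/stop aggressive object eviction at these fill percentages,
# so the cache normally stays under the cache_dir size
cache_swap_low  90
cache_swap_high 95
```

Raising the cache_dir size or lowering the swap watermarks is usually enough
to make the warning go away without patching.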

On August 25, at about 2:30pm, a download flood began. The router monitoring
the network showed a great deal of web request and response traffic flowing
to and from my proxy server. I am sure there is no service on the proxy
server other than Squid that could be used to access the web. At 5:35pm,
cache.log started logging
"2000/08/25 17:35:22| WARNING! Your cache is running out of filedescriptors"
and kept doing so until
"2000/08/26 08:10:26| WARNING! Your cache is running out of
filedescriptors".
My Linux system has 1024 file descriptors.
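As a side note, on Linux you can check the per-process file-descriptor limit
that Squid inherits, and the kernel-wide ceiling, like this (a generic
sketch, not specific to Li Ni's machine; the 4096 value is just an example):

```shell
# Soft limit on open files for the current shell (and any Squid
# process started from it); 1024 is a common default
ulimit -n

# Kernel-wide ceiling on open files (Linux only)
if [ -r /proc/sys/fs/file-max ]; then
    cat /proc/sys/fs/file-max
fi

# To raise the per-process limit before starting Squid (run as root):
#   ulimit -HSn 4096
# Note that Squid may also need to be rebuilt if it was compiled
# against a lower descriptor limit.
```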

The download flood lasted 5 days and moved about 60 GB of WWW data in total
(as counted by the router). After I restarted Squid, the flood disappeared
and never came back.

I did not keep the raw access.log file, only the weekly reports generated by
sqmgrlog. According to those reports, the total access data is far less than
60 GB, only about 10 GB.

Where did all the other data come from?
Did the large number of concurrent connections cause a Squid error?
Or did sqmgrlog not analyse the log file correctly?

I do not know why this happened.

You can get my cache.log from http://202.118.2.101:8080/cache.log
and my squid.conf from http://202.118.2.101:8080/squid.conf
and see my reports of august from
http://202.118.2.101:8080/squid-reports-weekly/

Any help will be appreciated.

-Li Ni

--
To unsubscribe, see http://www.squid-cache.org/mailing-lists.html
Received on Mon Oct 23 2000 - 05:31:19 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Dec 09 2003 - 16:55:53 MST