Re: [squid-users] Squid crashing when access.log hits 2GB.

From: Hendrik Voigtländer <hendrik@dont-contact.us>
Date: Wed, 26 May 2004 22:39:59 +0200

It seems to be more than just the FS:

http://www.suse.de/~aj/linux_lfs.html

I would try mkfile or dd to create huge files to test FS limits.
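For example, a sparse file just past the 2 GiB boundary can be created
almost instantly with dd (the path here is just an example):

```shell
# Create a file of 2 GiB + 1 byte by seeking past the 2^31 boundary
# and writing a single byte. On a filesystem or binary without large
# file support, this write fails with EFBIG instead of succeeding.
dd if=/dev/zero of=/tmp/lfs-test bs=1 count=1 seek=2147483648

# Report the file size (2147483649 bytes if the write succeeded).
stat -c %s /tmp/lfs-test
rm -f /tmp/lfs-test
```

Because the file is sparse it uses almost no disk space, so this is a
cheap way to check whether the 2 GB limit is in the FS or elsewhere.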

How about disabling the logs when running load tests? This will work
around the problem, but it may skew your results, since performance
will likely improve without logging.
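If I remember right, in Squid 2.5 the relevant directive is
cache_access_log; pointing it at /dev/null effectively disables access
logging. A sketch (check squid.conf.default for your build, as the
directive name has varied between Squid versions):

```
# squid.conf - send the access log to /dev/null during load tests.
# (Directive name as in Squid 2.5; verify against your version.)
cache_access_log /dev/null
```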

Regards, Hendrik

jimtorelli@comcast.net wrote:

> Ahh!
>
> That would make sense.. We're actually using ext3 with RAID 1+0 on the filesystem which carries the logs.. Does ReiserFS have these same barriers?
>
> Thanks!
> jim
>
>
>
>>Yes, with BIND on ext2 and a misconfigured logrotate :-)
>>What filesystem are you using? Probably the squid log is hitting an
>>FS-barrier.
>>
>>jimtorelli@comcast.net wrote:
>>
>>
>>>Hey everyone!
>>>
>>>Quick questions.. Twice in a row now, while running load tests, Squid dies
>>>as soon as the access.log file reaches 2 GB with the following error message:
>>>
>>>FATAL: logfileWrite: /usr/local/squid/var/logs/access.log: (0) Success.
>>>
>>>Has anyone ever seen this before?
>>>
>>>I'm running on an HP DL380 running RHEL 2.1 AS Update 2. There's definitely
>>>more than enough disk space.
>>
>>>Any thoughts?
>>>
>>>Thanks!
>>>jim
Received on Wed May 26 2004 - 14:40:21 MDT

This archive was generated by hypermail pre-2.1.9 : Tue Jun 01 2004 - 12:00:02 MDT