Re: [squid-users] solaris 10 process size problem

From: Mario Garcia Ortiz <mariog_at_absi.be>
Date: Wed, 13 Jan 2010 12:13:36 +0100

Hello,
thanks.

Do you run Solaris 10 without your Squid growing constantly in size? So far
our Squid has reached 800MB in one week; once it reaches 4GB it simply
crashes with a
FATAL xalloc error.
I am running the latest 3.0.STABLE21, so I don't see how an older version
would work better. Can you provide your whole configure line? We will test
it. Do we need to install Sun Studio for that?

I wish there were a way to simply swap in a different malloc library so that
the process would stop growing to the point where it is too big for
the OS to handle.
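
A minimal sketch of the kind of swap I mean (untested on my side; the paths
are the stock Solaris 10 ones, and a 64-bit binary would want
/usr/lib/64/libmtmalloc.so.1 instead):

  # preload the multi-threaded mtmalloc in place of the single-threaded libmalloc
  LD_PRELOAD=/usr/lib/libmtmalloc.so.1; export LD_PRELOAD
  /usr/local/squid/sbin/squid
  pldd `pgrep squid`   # should now list libmtmalloc.so.1 ahead of libmalloc.so.1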

2010/1/13 Struzik Wojciech <bm9ib2r5_at_gmail.com>:
> maybe it's better to use Squid 2.7 on Solaris 10 ;) it runs quite stable
> :)
>
>
>>>are there any recommended options when compiling on Solaris 10?
>
> I use these options to compile Squid, but with version 2.7:
>
> PATH="/opt/SUNWspro/bin:`cat /etc/pathrc`"; export PATH
> CC="/opt/SUNWspro/bin/cc"; export CC
> CFLAGS="-xarch=amd64 -xtarget=generic -xspace -xO4 -xildoff -xc99=all -m64 -I/opt/csw/include -I/usr/sfw/include"; export CFLAGS
> LDFLAGS="-L/opt/csw/lib -L/usr/sfw/lib -R/usr/sfw/lib"; export LDFLAGS
>
>
>
>
> On Tue, Jan 12, 2010 at 5:34 PM, Mario Garcia Ortiz <mariog_at_absi.be> wrote:
>>
>> Hello,
>> I still have this problem with the memory leak on Solaris. The server
>> has not crashed, but since Wednesday, when I restarted the proxy
>> server, the squid process size has stayed between 600M and 800M.
>>
>> I have read about an alternative malloc library, specifically the
>> multi-threaded mtmalloc present on Solaris 10 systems.
>>
>> My question: is it possible to get Squid to use this malloc
>> library (with the use of LD_PRELOAD)?
>>
>> When I run ldd /usr/local/squid/sbin/squid I see it uses the
>> single-threaded malloc library:
>> libmalloc.so.1 =>        /usr/lib/libmalloc.so.1
>>
>> Do I have to compile a new version of Squid with a malloc switch,
>> or will using LD_PRELOAD suffice to load the multi-threaded malloc
>> library?
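>>
>> A sketch of the two routes as I understand them (untested; the LDFLAGS
>> route is my own guess, not a documented Squid switch):
>>
>>   # route 1: no rebuild, preload mtmalloc at startup
>>   LD_PRELOAD=/usr/lib/libmtmalloc.so.1 /usr/local/squid/sbin/squid
>>
>>   # route 2: rebuild with squid linked against mtmalloc
>>   LDFLAGS="-lmtmalloc"; export LDFLAGS
>>   ./configure --prefix=/usr/local/squid   # plus the same options as before
>>   make && make install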
>>
>> What else can I do to determine why there is a memory leak and why my
>> squid process grows so big? (It eventually crashes when it reaches
>> 4GB.)
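>>
>> One thing I plan to try (assuming the standard Solaris proc tools are
>> installed) is checking whether the growth is really in the heap:
>>
>>   pmap -x `pgrep squid` | egrep 'heap|total'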
>>
>>
>> Thank you very much for your help; this is of paramount importance.
>>
>> kindest regards
>>
>> Mario G
>>
>> 2010/1/4 Mario Garcia Ortiz <mariog_at_absi.be>:
>> > Hello
>> >
>> > Here are the compile options used to build Squid on the Solaris
>> > system; there are no compilation errors or warnings:
>> >
>> > $ ./configure CFLAGS=-DNUMTHREADS=60 --prefix=/usr/local/squid
>> > --enable-snmp --with-maxfd=32768 --enable-removal-policies
>> > --enable-useragent-log --enable-storeio=diskd,null
>> >
>> > At the moment squidclient reports a process data segment size via
>> > sbrk() of 1213034 KB (more than 1GB); this is huge for a
>> > process.
>> >
>> > The OS reports similar numbers via the top command:
>> >
>> >  PID USERNAME LWP PRI NICE  SIZE   RES STATE    TIME    CPU COMMAND
>> >  12925 root       1  45    0 1113M 1110M sleep  218:44  7.33% squid
>> >  12893 root       1  54    0 1192M 1188M sleep  222:08  5.81% squid
>> >
>> > Here is the output of the command ps -e -o pid,vsz,comm
>> >
>> > pid      vsz        command
>> > 12925 1139948 (squid)
>> > 12893 1220136 (squid)
>> >
>> > When the size reaches 4GB the squid process crashes with these errors:
>> >
>> > FATAL: xcalloc: Unable to allocate 1 blocks of 4194304 bytes!
>> >
>> > Squid Cache (Version 3.0.STABLE20): Terminated abnormally.
>> > CPU Usage: 91594.216 seconds = 57864.539 user + 33729.677 sys
>> > Maximum Resident Size: 0 KB
>> > Page faults with physical i/o: 0
>> > Memory usage for squid via mallinfo():
>> >        total space in arena:  -157909 KB
>> >        Ordinary blocks:       691840 KB 531392 blks
>> >        Small blocks:            4460 KB 184700 blks
>> >        Holding blocks:            50 KB   1847 blks
>> >        Free Small blocks:        696 KB
>> >        Free Ordinary blocks:  -854957 KB
>> >        Total in use:          696351 KB -440%
>> >        Total free:            -854260 KB 541%
>> >
>> >
>> > The custom configuration parameters are:
>> > cache_dir null /path/to/cache
>> > cache_mem 512 MB --> I am going to lower this to 128 MB; maybe that
>> > is the cause.
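>> >
>> > The squid.conf snippet I intend to try (memory_pools off is my own
>> > addition, on the theory that it returns freed memory to malloc
>> > instead of keeping it pooled inside Squid):
>> >
>> >   cache_dir null /path/to/cache
>> >   cache_mem 128 MB
>> >   memory_pools off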
>> >
>> > I am compiling squid 3.0.STABLE21 in order to see if there is some
>> > improvement.
>> >
>> > Is it possible to get useful information out of a deliberately
>> > provoked core dump? I plan to do this because we usually have to wait
>> > 3 to 4 weeks for the problem to reproduce by itself, but we can see
>> > the process growing: in a few days it is already at 1GB.
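>> >
>> > (Presumably something like this would capture a core without killing
>> > the running process; pid 12925 is from the top output above:)
>> >
>> >   gcore -o /var/tmp/squid 12925   # writes /var/tmp/squid.12925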
>> >
>> >
>> > thank you very much for your collaboration.
>> >
>> >
>> >
>> > 2009/12/31 Amos Jeffries <squid3_at_treenet.co.nz>:
>> >> Mario Garcia Ortiz wrote:
>> >>>
>> >>> Hello
>> >>> thank you very much for your answer. The problem is that squid grows
>> >>> constantly in size; so far it is already at 1.5GB and it was
>> >>> restarted Monday.
>> >>> I will try to provoke a core dump so I can send it in.
>> >>
>> >> Squid is supposed to allow growth until the internal limit is reached.
>> >> According to those stats only 98% of the internal storage limit is used.
>> >>
>> >> Anything you can provide about build options, configuration settings,
>> >> and what the OS thinks the memory usage is will help narrow the
>> >> problem down.
>> >>
>> >>>
>> >>> In the meanwhile I will upgrade squid to the latest STABLE21. Are
>> >>> there any recommended options when compiling on Solaris 10?
>> >>
>> >> Options-wise everything builds on Solaris. Actual usage testing has
>> >> been a little light so we can't guarantee anything as yet.
>> >>
>> >> Some extra build packages may be needed:
>> >> http://wiki.squid-cache.org/KnowledgeBase/Solaris
>> >>
>> >>> Such as using
>> >>> an alternate malloc library?
>> >>
>> >> If you are able to find and use a malloc library that is known to
>> >> handle memory allocation on 64-bit systems well, that would be good.
>> >> They can be rare on some systems.
>> >>
>> >>
>> >> Amos
>> >>
>> >>>
>> >>> 2009/12/31 Amos Jeffries <squid3_at_treenet.co.nz>:
>> >>>>
>> >>>> Mario Garcia Ortiz wrote:
>> >>>>>
>> >>>>> Hello
>> >>>>> thank you very much for your help.
>> >>>>> The problem occurred once the process size reached 4GB. The only
>> >>>>> application running on the server is the proxy; there are two
>> >>>>> instances running, each one on a different IP address.
>> >>>>> There is no cache: squid was compiled with
>> >>>>> --enable-storeio=diskd,null and in squid.conf:
>> >>>>> cache_dir null /var/spool/squid1
>> >>>>>
>> >>>>> As for the hits, I assume there are none since there is no cache;
>> >>>>> am I wrong?
>> >>>>> Here is the mgr:info output from squidclient:
>> >>>>>
>> >>>>> Cache information for squid:
>> >>>>>       Hits as % of all requests:      5min: 11.4%, 60min: 17.7%
>> >>>>>       Hits as % of bytes sent:        5min: 8.8%, 60min: 10.3%
>> >>>>>       Memory hits as % of hit requests:       5min: 58.2%, 60min: 60.0%
>> >>>>>       Disk hits as % of hit requests: 5min: 0.1%, 60min: 0.1%
>> >>>>>       Storage Swap size:      0 KB
>> >>>>>       Storage Swap capacity:   0.0% used,  0.0% free
>> >>>>>       Storage Mem size:       516272 KB
>> >>>>>       Storage Mem capacity:   98.5% used,  1.5% free
>> >>>>>       Mean Object Size:       0.00 KB
>> >>>>>       Requests given to unlinkd:      0
>> >>>>>
>> >>>>>
>> >>>>> I am not able to find a core file on the system for yesterday's
>> >>>>> problem.
>> >>>>> Squid was restarted yesterday at 11:40 am and the process data
>> >>>>> segment size is now 940512 KB.
>> >>>>>
>> >>>>> I bet that if I let the process reach 4GB again the crash will
>> >>>>> recur. Maybe that is necessary in order to collect debug data?
>> >>>>>
>> >>>>> thank you in advance for your help it is very much appreciated.
>> >>>>>
>> >>>>> kindest regards
>> >>>>>
>> >>>>> Mario G.
>> >>>>>
>> >>>> You may have hit a malloc problem like one seen on recent 64-bit
>> >>>> FreeBSD.
>> >>>> Check what the OS reports Squid's memory usage as, in particular
>> >>>> VIRTSZ, during normal operation, and compare it to the internal
>> >>>> stats Squid keeps.
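>> >>>>
>> >>>> A sketch of that comparison (column names may vary by Solaris
>> >>>> release):
>> >>>>
>> >>>>   prstat -s size -p <squid-pid>         # OS view: SIZE is the virtual size
>> >>>>   squidclient mgr:info | grep -i sbrk   # Squid's view of its data segment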
>> >>>>
>> >>>>
>> >>>> Amos
>> >>>>
>> >>>>> 2009/12/23 Kinkie <gkinkie_at_gmail.com>:
>> >>>>>>
>> >>>>>> On Wed, Dec 23, 2009 at 3:12 PM, Mario Garcia Ortiz <mariog_at_absi.be> wrote:
>> >>>>>>>
>> >>>>>>> Hello
>> >>>>>>> I have used all the internet resources available and I still can't
>> >>>>>>> find a definitive solution to this problem.
>> >>>>>>> We have a squid running on a Solaris 10 server. Everything runs
>> >>>>>>> smoothly except that the process size grows constantly; it
>> >>>>>>> reached 4GB yesterday, after which the process crashed. This is
>> >>>>>>> the output from the log:
>> >>>>>>> FATAL: xcalloc: Unable to allocate 1 blocks of 4194304 bytes!
>> >>>>>>
>> >>>>>> [...]
>> >>>>>>
>> >>>>>>> I am eagerly looking forward to your help
>> >>>>>>
>> >>>>>> It seems like you're being hit by a memory leak, or there are some
>> >>>>>> serious configuration problems.
>> >>>>>> How often does this happen, and how much load is there on the
>> >>>>>> system? (in hits per second or minute, please)
>> >>>>>>
>> >>>>>> Going 64-bit for squid isn't going to solve things; at most it will
>> >>>>>> delay the crash, and it may cause further problems for system
>> >>>>>> stability.
>> >>>>>>
>> >>>>>> Please see http://wiki.squid-cache.org/SquidFaq/BugReporting for
>> >>>>>> hints
>> >>>>>> on how to proceed.
>> >>>>>>
>> >>>>>> --
>> >>>>>>  /kinkie
>> >>>>>>
>> >>>>
>> >>>> --
>> >>>> Please be using
>> >>>>  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
>> >>>>  Current Beta Squid 3.1.0.15
>> >>>>
>> >>
>> >>
>> >> --
>> >> Please be using
>> >>  Current Stable Squid 2.7.STABLE7 or 3.0.STABLE20
>> >>  Current Beta Squid 3.1.0.15
>> >>
>> >
>
>
Received on Wed Jan 13 2010 - 11:13:44 MST
