Re: memory watermarks ?

From: Henrik Nordstrom <hno@dont-contact.us>
Date: Fri, 27 Oct 2000 02:26:49 +0200

[linux i386 glibc assumed in this message]

> if I understand right, then VM size and space in arena should match.
> As they don't then either system or mallinfo is lying.

Not quite.

VM size is the total address space set up for the process. This includes
  * data segment (sbrk)
  * mmap segments
    - mmap() mappings including shared libraries
    - mmap() allocations
  * shared memory
  * stack

Total space in arena only accounts for the data segment, not memory
allocated with mmap(). To get the total memory usage by malloc you have
to sum arena + hblkhd (arena + holding blocks). Any difference between
this and the VSZ is other mmap() entries and the stack.
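
To illustrate (a minimal sketch, not from the original message, using
glibc's mallinfo() from <malloc.h>):

#include <stdio.h>
#include <stdlib.h>
#include <malloc.h>

int main(void)
{
    /* 1MB is above glibc's mmap threshold, so this allocation
       typically ends up in hblkhd rather than in the arena. */
    char *p = malloc(1024 * 1024);
    struct mallinfo mi = mallinfo();

    printf("arena (sbrk):  %d bytes\n", mi.arena);
    printf("hblkhd (mmap): %d bytes\n", mi.hblkhd);
    printf("malloc total:  %d bytes\n", mi.arena + mi.hblkhd);

    free(p);
    return 0;
}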

I don't know how top calculates the SIZE column. I have not found any
process statistic that matches it, and I have not bothered reading the
code.

Stacks for threads are allocated dynamically as required by the threads,
starting at 16K per thread. By default there is an upper bound
somewhere between 1 and 2MB (I am not sure exactly where the limit is,
but the stack segments start at 2MB intervals...).
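
A small sketch (my example, assuming LinuxThreads; link with -lpthread)
shows where a thread stack ends up relative to the main stack:

#include <stdio.h>
#include <pthread.h>

static void *report(void *arg)
{
    int marker;   /* lives on this thread's dynamically allocated stack */
    printf("thread stack near %p\n", (void *) &marker);
    return NULL;
}

int main(void)
{
    int marker;
    pthread_t t;

    printf("main stack near   %p\n", (void *) &marker);
    pthread_create(&t, NULL, report, NULL);
    pthread_join(t, NULL);
    return 0;
}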

Details of the virtual memory allocation can be found in
/proc/<nn>/maps. The data segment is the second entry of the
process image (the one with write permission). mmap() memory starts at
0x40000000, and the top of the stack is at 0xc0000000. Threads allocate
their stacks below the normal stack.
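
A process can dump its own map to see this (trivial sketch, equivalent
to cat /proc/self/maps):

#include <stdio.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/maps", "r");

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}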

> Maybe some system libs are using memory in a way that malloc lib
> cannot account for, and are leaking.

Which should be memory allocated using mmap(), as sbrk is controlled
exclusively by malloc, and is the value reported in "total space in
arena" by mallinfo IIRC.

> Maybe ps and top are summing differently: shared mem is 1M per PID,
> maybe top is counting this once, and ps for each thread?

Perhaps.

> Squid's own leaks should be reflected in mallinfo, I believe.

As long as we don't use mmap() then yes.

Please note that the VSZ can be quite large without any memory actually
being used. This can be caused by having lots of mmap() resources, or
by allocated memory that is never touched.

The following shows the status of a small program doing malloc() of 1000
MB.
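
The test program itself is not shown here; a minimal equivalent might
look like this (hypothetical reconstruction, taking the number of MB as
an argument and never touching the memory):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    size_t mb = (argc > 1) ? (size_t) atoi(argv[1]) : 1000;
    char *p = malloc(mb * 1024 * 1024);

    if (!p) {
        perror("malloc");
        return 1;
    }
    /* The memory is never written, so the kernel never has to back
       it with physical pages: VSZ is huge, RSS stays tiny. */
    pause();
    return 0;
}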

$ ps -up 23911
USER       PID %CPU %MEM     VSZ  RSS TTY   STAT START   TIME COMMAND
henrik   23911  0.0  0.6 1025088  380 pts/0 S    01:50   0:00 ./a.out 1000
$ free
             total       used       free     shared    buffers     cached
Mem:         62844      34108      28736       8716       1308      20564
-/+ buffers/cache:      12236      50608
Swap:       104828      19056      85772
$ top
  1:55am  up 10 days,  5:39,  3 users,  load average: 0.10, 0.12, 1.10
53 processes: 50 sleeping, 3 running, 0 zombie, 0 stopped
CPU states: 15.3% user, 11.3% system,  0.0% nice, 73.2% idle
Mem:   62844K av,  36232K used,  26612K free,  10236K shrd,   1308K buff
Swap: 104828K av,  18476K used,  86352K free                 21892K cached

  PID USER     PRI  NI  SIZE  RSS SHARE STAT  LIB %CPU %MEM   TIME COMMAND
23927 henrik     4   0  1064 1064   864 R       0  1.9  1.6   0:01 top
23911 henrik     0   0   380  380   312 S      20  0.0  0.6   0:00 a.out

mallinfo reports 1000MB+4KB in 1 holding block. The extra 4KB comes
from the 8-byte holding block header added by malloc: the header pushes
the mmap() request just past the 1000MB page boundary, so a whole extra
page gets mapped.

/Henrik