Re: Ever Increasing memory but not freed

Thanks Glynn for your comments,

1) I tried getrusage(), and it didn't work on Slackware 12.2 (Linux
2.6.27.7 kernel). It returned zero no matter what, and I read
somewhere on the internet that it cannot be used on Linux to get
memory usage. That's why I wrote my own function (a sketch of an
in-process alternative follows this list).
2) Yes, it is up to the kernel to reclaim unused memory pages on
demand, but what I am wondering is why the program crashes every
time RSS reaches exactly the same amount, which is almost 65 MB.
3) My system has 1 GB of free RAM, and there is always far more free
memory than the process needs.
4) For some method calls RSS doesn't increase on subsequent calls,
but for others it increases every time. It wouldn't be a problem if
memory usage stabilized at some level, but it doesn't.
5) What is interesting is that whenever RSS does increase on a
method call, it increases in 4K steps. That made me think it is
something related to the stack created on each method call.
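
For reference, here is a minimal sketch of an in-process alternative
to the ps pipeline: it reads the VmRSS line from /proc/self/status
instead of spawning a shell. (As far as I can tell, the ru_maxrss
field of getrusage() is only filled in by kernels newer than 2.6.27,
which would explain the zeros I was seeing.) The name get_rss_kb is
just a placeholder and the code is untested:

#include <stdio.h>
#include <string.h>

/* Sketch: return the process' resident set size in kB by reading
 * the VmRSS line from /proc/self/status, or -1 on error. */
static long get_rss_kb(void)
{
    FILE *fp = fopen("/proc/self/status", "r");
    char line[256];
    long rss_kb = -1;

    if (fp == NULL)
        return -1;

    while (fgets(line, sizeof(line), fp) != NULL) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            /* the line looks like "VmRSS:       944 kB" */
            sscanf(line + 6, "%ld", &rss_kb);
            break;
        }
    }
    fclose(fp);
    return rss_kb;
}

Something like this could be called before log_debug() without the
mutex and popen() overhead. For what it's worth, the 4K steps in
point 5 match the page size, which would fit the demand-paging
explanation quoted below.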

2010/1/28 Glynn Clements <glynn@xxxxxxxxxxxxxxxxxx>:
>
> Uğur ATA wrote:
>
>> I have a problem with a daemon process that I have written. It
>> listens for text commands on a local socket and does some background
>> work. I use the same executable to send commands to the daemon
>> process. What I observe is that it crashes after some time, and
>> always when the RSS reaches 65500K.
>>
>> I suspected a heap problem at first and used valgrind to track
>> memory issues. It helped me find some minor memory leaks, but fixing
>> them didn't change much. Later I wrote a logging method to record
>> the current memory usage whenever it is called, as below:
>> void getMemoryUsage(char *position) {
>>     char result[BUFFER_SIZE + 1];
>>     const char *command =
>>         "ps -eaf | grep BSM | grep -v grep | awk '{print $2}' "
>>         "| xargs ps -ly -p | tail -1";
>>
>>     FILE *ptr;
>>
>>     pthread_mutex_lock(&mutex_mem);
>>     if ((ptr = popen(command, "r")) != NULL) {
>>         if (fgets(result, BUFFER_SIZE, ptr) != NULL) {
>>             int len = strlen(result);
>>             result[len - 1] = '\0';  /* strip trailing newline */
>>             log_debug("|%s||%s|\n", position, result);
>>         }
>>         pclose(ptr);  /* only close the stream if popen() succeeded */
>>     }
>>     pthread_mutex_unlock(&mutex_mem);
>> }
>
> If you want to query memory usage for the current process, use
> getrusage().
>
>> Sample output for this is:
>>
>> [2010-01-28 12:31:19.162] |::loop1::||S     0 13967     1  0  80   0
>> 944  5170 pipe_w pts/1    00:00:00 limit|
>>
>>
>> What I noticed is that the RSS value increases by 4K in each
>> different method, but only on its first call.
>>
>> For example, with the following statement RSS increases by 4K, but
>> only the first time. When it is called again, RSS does not change:
>>    char* end_of_command = strchr(message, KEY_SOCKET_SEPERATOR);
>>
>> This is much the same for every other method. So I thought it was
>> caused by stack growth in each call, or something like that.
>
> The RSS is the amount of physical memory allocated to a process.
> Executables and libraries are loaded on demand; the first time that
> you call a function, the page of the executable or library
> containing that function will be mapped into memory. Similarly for
> accessing a static variable.
>
> Once loaded into memory, the page will remain in memory until the
> kernel needs the memory for something else. At which point, an
> unmodified page will be discarded while a modified page will be
> written to swap (assuming that swap is enabled; if it isn't, modified
> pages will remain in physical memory for the lifetime of the process).
>
> Similarly, memory allocated via brk() or anonymous-mmap() will only be
> backed by physical memory when accessed, and will be swapped out if
> it's needed for something else and hasn't been accessed recently.
>
>> The
>> problem is that RSS usage never drops when a method returns and keeps
>> increasing in each call
>
> That's quite typical if the system-wide demand for physical memory is
> low. The kernel will only try to recover physical memory from a
> process if it actually needs it for something else.
>
> Once a system has been running for a while, there will be very little
> "free" memory; most of it will have been allocated to something. E.g.
> here (on a very lightly-loaded system):
>
>             total       used       free     shared    buffers     cached
> Mem:       2009460    1958756      50704          0      84084    1639176
> -/+ buffers/cache:     235496    1773964
> Swap:      1048568      21768    1026800
>
> 50MB "free" out of a total of 2GB. But 1.7GB is used for buffers and
> cache, which is memory that could easily be recovered if it were
> needed for something else.
>
>> thus reaching nearly 65M, and the program crashes.
>>
>> Does anybody have a clue on what might be causing that weird behaviour?
>
> How much physical RAM do you have? 65M isn't a lot these days, but if
> that's all of your physical RAM, and you don't have swap enabled, the
> kernel will kill your process in order to recover its physical memory
> so that it can keep the system running.
>
> Physical memory usage seldom tells you anything about the behaviour of
> the process unless system-wide demand for physical memory is high.
> Virtual memory usage is far more relevant. Look at the process'
> /proc/pid/maps file to see how it's using virtual memory.
>
> --
> Glynn Clements <glynn@xxxxxxxxxxxxxxxxxx>
>
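
P.S. Regarding /proc/pid/maps: something like the following could
log how many mappings the daemon has over time, to see whether new
mappings keep appearing (count_mappings is a made-up name and the
code is untested):

#include <stdio.h>

/* Sketch: count the newline-terminated lines in /proc/self/maps,
 * i.e. the number of distinct mappings, or -1 on error. */
static int count_mappings(void)
{
    FILE *fp = fopen("/proc/self/maps", "r");
    int c, n = 0;

    if (fp == NULL)
        return -1;

    while ((c = fgetc(fp)) != EOF)
        if (c == '\n')
            n++;

    fclose(fp);
    return n;
}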
