Re: Memory pages not released by the filesystem after a truncate

On Wed, Jul 6, 2016 at 6:24 AM, Houssem Daoud <houssem.daoud@xxxxxxxxxx> wrote:
Hi,

My system is experiencing problems with atomic memory allocations. Device
drivers are unable to allocate contiguous memory regions due to a high
level of fragmentation.

At the time of failure, /proc/meminfo shows the following:
MemTotal: 4021820 kB
MemFree: 121912 kB
Active: 1304396 kB
Inactive: 2377124 kB

Most of the memory is consumed by the inactive LRU list and only about
121 MB is available to the system.
By using a tracer, I found that most of the pages in the inactive list
are created by the ext4 journal during a truncate operation.
The call stack of the allocation is:
[
__alloc_pages_nodemask
alloc_pages_current
__page_cache_alloc
find_or_create_page
__getblk
jbd2_journal_get_descriptor_buffer
jbd2_journal_commit_transaction
kjournald2
kthread
]

The problem is easily reproducible using the following script:
#!/bin/bash
while true; do
    dd if=/dev/zero of=output.dat bs=100M count=1
done

Is this normal behavior? I know that the philosophy of memory
management in Linux is to use the available memory as much as possible,
but why keep truncated pages on the LRU if we know they are no longer
accessible?

The inactive-list growth occurs only with the journal mode of ext4
(data=journal), not with the writeback mode (data=writeback).

A chart of memory utilization during the test is available at this
link:
http://secretaire.dorsal.polymtl.ca/~hdaoud/ext4_journal_meminfo.png

Thanks,
Houssem


_______________________________________________
Kernelnewbies mailing list
Kernelnewbies@xxxxxxxxxxxxxxxxx
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies

Hi

Trying to help here:
You said you want to do an atomic allocation, but then you said you want to allocate a ~100 MB contiguous memory region.

IIRC, atomic allocations usually cannot be that big. I am not sure of the exact limit, but it is certainly nowhere near 100 MB. For that size, I think you should rely on vmalloc().
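
A minimal kernel-module sketch of what I mean (untested; the 100 MB size is illustrative). Note that vmalloc() builds the buffer out of order-0 pages and only maps them into a virtually contiguous range, so it is immune to buddy-allocator fragmentation, but the memory is not physically contiguous and cannot be handed directly to a DMA engine:

```c
#include <linux/module.h>
#include <linux/vmalloc.h>

#define BUF_SIZE (100UL << 20)  /* 100 MB, illustrative only */

static void *buf;

static int __init frag_demo_init(void)
{
	/*
	 * A kmalloc(GFP_ATOMIC) of this size would need a huge
	 * physically contiguous block (far above the MAX_ORDER
	 * limit) and would fail on a fragmented system; vmalloc()
	 * succeeds as long as enough single pages are free.
	 */
	buf = vmalloc(BUF_SIZE);
	if (!buf)
		return -ENOMEM;
	return 0;
}

static void __exit frag_demo_exit(void)
{
	vfree(buf);
}

module_init(frag_demo_init);
module_exit(frag_demo_exit);
MODULE_LICENSE("GPL");
```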

But, for clarification, maybe you should also post the full contents of /proc/buddyinfo and /proc/meminfo.
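
To quantify the fragmentation, something like this rough sketch can summarize /proc/buddyinfo (the sample line below is made up for illustration, not from your system; each column is the count of free blocks of 2^order pages):

```python
def parse_buddyinfo(text):
    """Return {(node, zone): [free block count per order 0..N]}."""
    zones = {}
    for line in text.strip().splitlines():
        parts = line.split()
        # Line format: "Node 0, zone Normal <count> <count> ..."
        node = int(parts[1].rstrip(','))
        zone = parts[3]
        zones[(node, zone)] = [int(c) for c in parts[4:]]
    return zones

def free_kib(counts, page_kib=4):
    """Total free memory implied by the per-order counts, in KiB."""
    return sum(n * page_kib * (1 << order) for order, n in enumerate(counts))

# Made-up sample: plenty of order-0 pages, nothing above order 4,
# which is exactly the fragmented picture you are describing.
sample = "Node 0, zone   Normal    200   100    50    10     2     0     0     0     0     0     0"

zones = parse_buddyinfo(sample)
counts = zones[(0, "Normal")]
print(counts)            # [200, 100, 50, 10, 2, 0, 0, 0, 0, 0, 0]
print(free_kib(counts))  # 2848
```

On a live system you would read the real data with `open("/proc/buddyinfo").read()` instead of the sample string; a long tail of zeros at the high orders is what makes large atomic allocations fail.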


--
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com
