On Wed 13-07-16 15:19:05, Michal Hocko wrote:
> [CC ext/jbd experts]

Thanks.

> On Wed 13-07-16 01:48:57, Houssem Daoud wrote:
> > Hi,
> >
> > I was testing the filesystem performance of my system using the
> > following script:
> >
> > #!/bin/bash
> > while true;
> > do
> >     dd if=/dev/zero of=output.dat bs=100M count=1
> > done
> >
> > I noticed that after some time, all the physical memory is consumed
> > by the LRU inactive list and only 120 MB are left to the system.
> > /proc/meminfo shows the following:
> >
> > MemTotal:  4021820 kB
> > MemFree:    121912 kB
> > Active:    1304396 kB
> > Inactive:  2377124 kB
> >
> > The evolution of memory utilization over time is available at this
> > link: http://secretaire.dorsal.polymtl.ca/~hdaoud/ext4_journal_meminfo.png
> >
> > With the help of a kernel tracer, I found that most of the pages on
> > the inactive list are allocated by the ext4 journal during a truncate
> > operation. The call stack of the allocation is:
> >
> > [
> >   __alloc_pages_nodemask
> >   alloc_pages_current
> >   __page_cache_alloc
> >   find_or_create_page
> >   __getblk
> >   jbd2_journal_get_descriptor_buffer
> >   jbd2_journal_commit_transaction
> >   kjournald2
> >   kthread
> > ]
> >
> > I can't find an explanation for why the LRU keeps growing while we
> > are just writing to the same file again and again. I know that the
> > philosophy of memory management in Linux is to use the available
> > memory as much as possible, but what is the point of keeping
> > truncated pages on the LRU if we know they are not even accessible?
> >
> > Thanks!
> >
> > ps: My system is running kernel 4.3 with an ext4 filesystem
> > (data=journal mode).

This problem should be fixed by commit bc23f0c8d7cc "jbd2: Fix
unreclaimed pages after truncate in data=journal mode", which was
merged in 4.4.

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
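
For anyone hitting this on an older kernel, a quick way to confirm whether a
given tree already carries the fix (assuming you have a git checkout of the
mainline kernel; the commands below are a generic sketch, not something from
the thread itself) is to ask git which release tags contain the commit:

    # show the commit referenced above
    $ git log --oneline -1 bc23f0c8d7cc
    # list release tags that already include it
    $ git tag --contains bc23f0c8d7cc | head

Release tags from v4.4 onwards should appear in the output of the second
command; on a running system, 'uname -r' reporting 4.4 or later is the quick
equivalent.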