Memory reclaim and XFS objects

Hi Darrick,

One of our customers hit a system hang after several OOM conditions.
Looking into the crash dump, I found a large number of threads (334) stuck in xfs_reclaim_inodes_ag() waiting on a mutex.
Here is an example:
PID: 2      TASK: ffff88de5b7c1160  CPU: 6   COMMAND: "kthreadd"
 #0 [ffff88de5b7cf488] __schedule at ffffffff81b2d932
 #1 [ffff88de5b7cf518] schedule_preempt_disabled at ffffffff81b2ed79
 #2 [ffff88de5b7cf528] __mutex_lock_slowpath at ffffffff81b2cc77
 #3 [ffff88de5b7cf580] mutex_lock at ffffffff81b2c05f
 #4 [ffff88de5b7cf598] xfs_reclaim_inodes_ag at ffffffffc0366f1c [xfs]
 #5 [ffff88de5b7cf730] xfs_reclaim_inodes_nr at ffffffffc03680c3 [xfs]
 #6 [ffff88de5b7cf750] xfs_fs_free_cached_objects at ffffffffc037a169 [xfs]
 #7 [ffff88de5b7cf760] super_cache_scan at ffffffff81640e4e
 #8 [ffff88de5b7cf7a0] shrink_slab at ffffffff815bd3f3
 #9 [ffff88de5b7cf868] shrink_zone at ffffffff815c0cb0
#10 [ffff88de5b7cf8e0] do_try_to_free_pages at ffffffff815c1230
#11 [ffff88de5b7cf988] try_to_free_pages at ffffffff815c1775
#12 [ffff88de5b7cfa18] __alloc_pages_slowpath at ffffffff81b250fd
#13 [ffff88de5b7cfb08] __alloc_pages_nodemask at ffffffff815b4d35
#14 [ffff88de5b7cfbc0] new_slab at ffffffff81611c46
#15 [ffff88de5b7cfc00] ___slab_alloc at ffffffff8161210c
#16 [ffff88de5b7cfcd0] __slab_alloc at ffffffff81b26903
#17 [ffff88de5b7cfd10] kmem_cache_alloc_node at ffffffff81612ac9
#18 [ffff88de5b7cfd60] copy_process at ffffffff81491be9
#19 [ffff88de5b7cfde8] do_fork at ffffffff81493751
#20 [ffff88de5b7cfe60] kernel_thread at ffffffff81493a06
#21 [ffff88de5b7cfe70] create_kthread at ffffffff814bf664
#22 [ffff88de5b7cfe88] kthreadd at ffffffff814c0315 
# grep -c xfs_reclaim_inodes_nr bt.log
334

Looking at the code, this appears to be the result of a race between multiple threads trying to do reclaim at the same time: they all pile up on the same per-AG reclaim locks, yet no thread manages to flush its own nr_to_scan quota of objects.
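For reference, this is the pattern I am looking at, paraphrased from xfs_reclaim_inodes_ag() in fs/xfs/xfs_icache.c as I read it in our tree (a sketch, not a verbatim copy):

restart:
	skipped = 0;
	ag = 0;
	while ((pag = xfs_perag_get_tag(mp, ag, XFS_ICI_RECLAIM_TAG))) {
		ag = pag->pag_agno + 1;
		if (trylock) {
			if (!mutex_trylock(&pag->pag_ici_reclaim_lock)) {
				skipped++;	/* another thread owns this AG */
				xfs_perag_put(pag);
				continue;
			}
		} else {
			/* blocking pass: this is where the 334 threads sit */
			mutex_lock(&pag->pag_ici_reclaim_lock);
		}
		/* ... scan and reclaim inodes from this AG ... */
		mutex_unlock(&pag->pag_ici_reclaim_lock);
		xfs_perag_put(pag);
	}
	/*
	 * With SYNC_WAIT set, a thread that was beaten to every AG does a
	 * second, blocking pass instead of returning what it got.
	 */
	if (skipped && (flags & SYNC_WAIT) && *nr_to_scan > 0) {
		trylock = 0;
		goto restart;
	}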
I think xfs_reclaim_inodes_nr() doesn't need to pass SYNC_WAIT, since shrink_slab() has its own loop and will call back into the shrinker to flush additional objects in case the first pass didn't flush them all. Something like the following sketch (against my reading of the code; the exact context may differ in your tree):
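--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ xfs_reclaim_inodes_nr @@
 	/* kick background reclaimer and push the AIL */
 	xfs_reclaim_work_queue(mp);
 	xfs_ail_push_all(mp->m_ail);
 
-	return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK | SYNC_WAIT, &nr_to_scan);
+	return xfs_reclaim_inodes_ag(mp, SYNC_TRYLOCK, &nr_to_scan);

With SYNC_WAIT gone, a thread that loses the trylock race on every AG just returns whatever it managed to reclaim, and shrink_slab() decides whether to call back in for more.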
What do you think about this change?



Thanks in advance,
Alex


