On Fri, Aug 12, 2016 at 09:44:55AM +0200, Michal Hocko wrote:
> > [...]
> >
> > > [114824.060378] Mem-Info:
> > > [114824.060403] active_anon:170168 inactive_anon:170168 isolated_anon:0
> > >  active_file:192892 inactive_file:133384 isolated_file:0
> >
> > LRU 32%
> >
> > >  unevictable:0 dirty:37109 writeback:1 unstable:0
> > >  slab_reclaimable:1176088 slab_unreclaimable:109598
> >
> > slab 61%
> >
> > [...]
> >
> > That being said it is really unusual to see such a large kernel memory
> > foot print. The slab memory consumption grows but it doesn't seem to be
> > a memory leak at first glance.

From discussions on #xfs, it's the ext4 inode slab that is consuming
most of this memory, which is, of course, expected when running a
workload that creates millions of hardlinks.

AFAICT, the difference between XFS and ext4 in this case is that XFS
throttles direct reclaim to the synchronous inode reclaim rate in its
custom inode cache shrinker. This is necessary because when we are
dirtying large numbers of inodes, memory reclaim encounters those dirty
inodes and can't reclaim them immediately; i.e. it takes IO to reclaim
them, just like it does for dirty pages. However, we throttle the rate
at which we dirty pages to prevent filling memory with unreclaimable
dirty pages, as that causes spurious OOM situations to occur. The same
spurious OOM situations occur when memory is full of dirty inodes, and
so allocation rate throttling is needed for large-scale inode cache
intensive workloads like this as well....
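To illustrate what I mean by throttling direct reclaim to the
synchronous inode reclaim rate, here's a minimal sketch. This is not
the real XFS code; the example_* names are hypothetical stand-ins for
filesystem-private structures, but the shape shows how a filesystem can
hook inode reclaim via its superblock operations:

#include <linux/fs.h>

/* Hypothetical per-superblock info for this sketch. */
struct example_sb_info {
	long	nr_reclaimable;		/* inodes on our reclaim list */
};

static struct example_sb_info *example_sb(struct super_block *sb)
{
	return sb->s_fs_info;
}

/*
 * Hypothetical worker: frees clean inodes immediately; when @sync is
 * true, writes dirty inodes back and waits for the IO before freeing
 * them. Returns the number of inodes reclaimed.
 */
static long example_reclaim_inodes(struct example_sb_info *sbi,
				   unsigned long nr_to_scan, bool sync);

static long example_nr_cached_objects(struct super_block *sb,
				      struct shrink_control *sc)
{
	return example_sb(sb)->nr_reclaimable;
}

static long example_free_cached_objects(struct super_block *sb,
					struct shrink_control *sc)
{
	/*
	 * sync == true is the throttle: a direct reclaimer that calls
	 * in here blocks until the inode writeback IO completes, so
	 * the allocation rate cannot outrun the rate at which dirty
	 * inodes are cleaned.
	 */
	return example_reclaim_inodes(example_sb(sb), sc->nr_to_scan, true);
}

static const struct super_operations example_super_ops = {
	/* ... other methods elided ... */
	.nr_cached_objects	= example_nr_cached_objects,
	.free_cached_objects	= example_free_cached_objects,
};

In XFS this hooks in via xfs_fs_free_cached_objects(), which runs
inode reclaim in synchronous (SYNC_WAIT) mode; ext4 has no equivalent
blocking point, so direct reclaim never waits on dirty inode writeback
and memory fills up with dirty inodes instead.

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx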