Re: Question on the xfs inode slab memory

On Wed, May 31, 2023 at 11:21:41PM -0700, Jianan Wang wrote:
> Seems the auto-wrapping issue is on my gmail.... using thunderbird should be better...

Thanks!

> Resend the slabinfo and meminfo output here:
> 
> Linux # cat /proc/slabinfo
> slabinfo - version: 2.1
> # name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
.....
> xfs_dqtrx              0      0    528   31    4 : tunables    0    0    0 : slabdata      0      0      0
> xfs_dquot              0      0    496   33    4 : tunables    0    0    0 : slabdata      0      0      0
> xfs_buf           2545661 3291582    384   42    4 : tunables    0    0    0 : slabdata  78371  78371      0
> xfs_rui_item           0      0    696   47    8 : tunables    0    0    0 : slabdata      0      0      0
> xfs_rud_item           0      0    176   46    2 : tunables    0    0    0 : slabdata      0      0      0
> xfs_inode         23063278 77479540   1024   32    8 : tunables    0    0    0 : slabdata 2425069 2425069      0
> xfs_efd_item        4662   4847    440   37    4 : tunables    0    0    0 : slabdata    131    131      0
> xfs_buf_item        8610   8760    272   30    2 : tunables    0    0    0 : slabdata    292    292      0
> xfs_trans           1925   1925    232   35    2 : tunables    0    0    0 : slabdata     55     55      0
> xfs_da_state        1632   1632    480   34    4 : tunables    0    0    0 : slabdata     48     48      0
> xfs_btree_cur       1728   1728    224   36    2 : tunables    0    0    0 : slabdata     48     48      0

There's no xfs_ili slab cache - this kernel must be using merged
slabs, so I'm going to have to infer how many inodes are dirty from
other slabs. The inode log item is ~190 bytes in size, so....
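To make that inference concrete: SLUB merges caches whose aligned object sizes (and flags) match, so a ~190-byte inode log item ends up sharing a slab with other objects in the 192-byte size class. A minimal sketch of the rounding involved, assuming the common 8-byte object alignment (the real merge check also compares cache flags and other parameters):

```python
def slub_aligned_size(obj_size: int, align: int = 8) -> int:
    """Round an object size up to the cache's alignment boundary.

    8-byte alignment is an assumption here; the kernel may align
    differently depending on config and per-cache flags.
    """
    return (obj_size + align - 1) // align * align

# A ~190-byte xfs_ili object rounds up to the 192-byte size class,
# which is why its objects can hide inside a 192-byte merged slab.
print(slub_aligned_size(190))  # -> 192
```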

> skbuff_ext_cache  16454495 32746392    192   42    2 : tunables    0    0    0 : slabdata 779676 779676      0

Yup, there they are - a 192-byte slab with 16 million active objects.
Not all of those inodes will be dirty right now, but ~65% of the
inodes cached in memory have been dirty at some point.
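That estimate can be recomputed straight from the active object counts in the slabinfo output quoted above (version 2.1 layout: name, active_objs, num_objs, objsize, ...). A rough sketch, with the two relevant lines embedded since reading /proc/slabinfo needs root; note the merged 192-byte slab also holds unrelated objects, so this is really an upper bound, and the raw ratio here lands a little above the rough two-thirds figure:

```python
# Estimate what fraction of cached xfs inodes carry a log item
# (i.e. have been dirtied) from /proc/slabinfo counts.
SLABINFO = """\
xfs_inode         23063278 77479540   1024   32    8 : tunables    0    0    0 : slabdata 2425069 2425069      0
skbuff_ext_cache  16454495 32746392    192   42    2 : tunables    0    0    0 : slabdata 779676 779676      0
"""

def active_objs(slabinfo: str, cache: str) -> int:
    """Return the <active_objs> field for the named cache."""
    for line in slabinfo.splitlines():
        fields = line.split()
        if fields and fields[0] == cache:
            return int(fields[1])
    raise KeyError(cache)

inodes = active_objs(SLABINFO, "xfs_inode")
# The 192-byte merged slab holding the xfs_ili objects (an inference
# from the object size, not a guarantee).
log_items = active_objs(SLABINFO, "skbuff_ext_cache")
print(f"at most {log_items / inodes:.0%} of cached inodes have a log item")
```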

So, yes, it is highly likely that your memory reclaim/OOM problems
are caused by blocking on dirty inodes in memory reclaim, which you
can only fix by upgrading to a newer kernel.

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
