On Wed, May 31, 2023 at 02:29:52PM -0700, Jianan Wang wrote:
> Hi all,
>
> I have a question regarding the xfs slab memory usage when operating a
> filesystem with 1-2 billion inodes (raid 0 with 6 disks, totally
> 18TB). On this partition, whenever there is a high disk io operation,
> like removing millions of small files, the slab kernel memory usage
> will increase a lot, leading to many OOM issues happening for the
> services running on this node. You could check some of the stats as
> the following (only includes the xfs related):

You didn't include all the XFS related slabs. At minimum, the inode
log item slab (xfs_ili) needs to be shown, because that tells us how
many of the inodes in the cache have been dirtied.

As it is, I'm betting the problem is that the disk subsystem can't
write back dirty inodes fast enough to keep up with memory demand,
and so reclaim is declaring OOM faster than your disks can clean
inodes to make them reclaimable.

> #########################################################################
> Active / Total Objects (% used): 281803052 / 317485764 (88.8%)
> Active / Total Slabs (% used): 13033144 / 13033144 (100.0%)
> Active / Total Caches (% used): 126 / 180 (70.0%)
> Active / Total Size (% used): 114671057.99K / 127265108.19K (90.1%)
> Minimum / Average / Maximum Object : 0.01K / 0.40K / 16.75K
>
> OBJS ACTIVE USE OBJ SIZE SLABS
> OBJ/SLAB CACHE SIZE NAME
> 78207920 70947541 0% 1.00K 7731010
> 32 247392320K xfs_inode
> 59945928 46548798 0% 0.19K 1433102
> 42 11464816K dentry
> 25051296 25051282 0% 0.38K 599680
> 42 9594880K xfs_buf

Ok, that's from slabtop. Please don't autowrap stuff you've pasted in -
it makes it really hard to read. Reformatted so I can read it:

    OBJS   ACTIVE  USE OBJ SIZE   SLABS OBJ/SLAB CACHE SIZE  NAME
78207920 70947541   0%    1.00K 7731010       32 247392320K  xfs_inode
59945928 46548798   0%    0.19K 1433102       42  11464816K  dentry
25051296 25051282   0%    0.38K  599680       42   9594880K  xfs_buf

So, 70 million active cached inodes, with a reported cache size of
roughly 240GB. That cache size is consistent with the slab count:
7731010 slabs * 32 objects/slab = 247392320 objects, which at 1.00K
per object is exactly the 247392320K shown. But then why does OBJS
report only 78 million objects, when a 240GB cache of 1K objects
should hold around 247 million? It looks like there's some kind of
accounting problem here, likely in the slabtop program - I have
always found slabtop to be unreliable like this....

Can you attach the output of 'cat /proc/slabinfo' and 'cat
/proc/meminfo' when you have a large slab cache in memory?

> #########################################################################
>
> The peak slab memory usage could spike all the way to 100GB+.

Is that all? :)

> We are using Ubuntu 18.04 and the xfs version is 4.9, kernel version
> is 5.4

Ah, I don't think there's anything upstream can do for you. We rewrote
large portions of the XFS inode reclaim code in 5.9 (3 years ago) to
address exactly these issues with memory reclaim getting stuck on
dirty XFS inodes, so inode reclaim behaviour in modern kernels is
completely different to that of old kernels. I'd suggest that you
upgrade your systems to a more modern kernel and see if that fixes
the issues you are seeing...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
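
P.S. While you're gathering the full dumps, here's a minimal sketch
of how to pull out just the XFS-related slab caches for a quick look -
the exact cache names can vary between kernel versions, so check them
against your own slabinfo:

    # all XFS slab caches, including xfs_ili (dirty inode log items);
    # reading /proc/slabinfo typically requires root
    grep '^xfs' /proc/slabinfo

    # the overall slab memory picture
    grep -E '^(Slab|SReclaimable|SUnreclaim|MemFree):' /proc/meminfo

That's only a quick check of how many dirty inodes (xfs_ili objects)
are sitting in the cache - the complete /proc/slabinfo and
/proc/meminfo output is still what we need to see.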