Hi Dave,
Thank you for your response.
We did some further investigation of the issue, and we have the following
findings:
1) We tracked the maximum number of inodes per AG radix tree. In our tests,
the maximum we saw in a single AG radix tree was about 1.5M inodes:
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1368]: count=1384662
reclaimable=58
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1368]: count=1384630
reclaimable=46
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1368]: count=1384600
reclaimable=16
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1370]: count=1594500
reclaimable=75
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1370]: count=1594468
reclaimable=55
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1370]: count=1594436
reclaimable=46
[xfs_reclaim_inodes_ag:1285] XFS(dm-79): AG[1370]: count=1594421
reclaimable=42
(but the number of reclaimable inodes is very small, as you can see).
Do you think this number of inodes is reasonable for a single radix tree?
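
Roughly, the instrumentation that produced the counts above looks like the
sketch below. This is a simplified illustration rather than our exact debug
patch: the function name is made up, and it assumes it sits in
fs/xfs/xfs_icache.c on 4.14, where the XFS helpers and XFS_LOOKUP_BATCH are
already in scope.

/*
 * Count the inodes in one per-AG inode radix tree and report it together
 * with the per-AG reclaimable counter, similar to the debug output above.
 * Illustration only.
 */
static void xfs_count_ag_inodes(struct xfs_mount *mp, xfs_agnumber_t agno)
{
	struct xfs_perag	*pag = xfs_perag_get(mp, agno);
	struct xfs_inode	*batch[XFS_LOOKUP_BATCH];
	unsigned long		first_index = 0;
	unsigned long		count = 0;
	int			nr_found;

	do {
		rcu_read_lock();
		nr_found = radix_tree_gang_lookup(&pag->pag_ici_root,
				(void **)batch, first_index,
				XFS_LOOKUP_BATCH);
		if (nr_found) {
			count += nr_found;
			/* resume after the last inode seen in this batch */
			first_index = XFS_INO_TO_AGINO(mp,
					batch[nr_found - 1]->i_ino) + 1;
		}
		rcu_read_unlock();
		cond_resched();
	} while (nr_found);

	pr_info("XFS(%s): AG[%u]: count=%lu reclaimable=%d\n",
		mp->m_fsname, agno, count, pag->pag_ici_reclaimable);

	xfs_perag_put(pag);
}
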
2) This particular XFS filesystem is 500TB in total. However, the AG size in
this case is 100GB. This is the AG size that we use due to the issues we
reported in https://www.spinics.net/lists/linux-xfs/msg06501.html,
where the "near" allocation algorithm got stuck for a long time scanning the
free-space btrees. With a smaller AG size, we don't see such issues.
But with a 500TB filesystem, we now have 5000 AGs. As a result, we suspect
(based on some instrumentation) that looping over 5000 AGs in
xfs_reclaim_inodes_ag() is what is causing the RCU stall for us. Although
the code has a cond_resched() call, the RCU stall still happens, and it
always happens in this function, while searching the radix tree.
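
For reference, this is how we read the loop structure involved. It is a
simplified paraphrase of xfs_reclaim_inodes_ag(), not the verbatim kernel
code (the real function also handles the reclaim cursor, per-inode
validation, the reclaim mutex and the nr_to_scan accounting):

static void xfs_reclaim_walk_sketch(struct xfs_mount *mp)
{
	struct xfs_perag	*pag;
	xfs_agnumber_t		ag = 0;

	/* outer loop: visit every AG that has the reclaim tag set */
	while ((pag = xfs_perag_get_tag(mp, ag, XFS_ICI_RECLAIM_TAG))) {
		struct xfs_inode	*batch[XFS_LOOKUP_BATCH];
		unsigned long		first_index = 0;
		int			nr_found;

		ag = pag->pag_agno + 1;
		do {
			/* batched tag search under the RCU read lock */
			rcu_read_lock();
			nr_found = radix_tree_gang_lookup_tag(
					&pag->pag_ici_root,
					(void **)batch, first_index,
					XFS_LOOKUP_BATCH,
					XFS_ICI_RECLAIM_TAG);
			if (nr_found)
				first_index = XFS_INO_TO_AGINO(mp,
					batch[nr_found - 1]->i_ino) + 1;
			rcu_read_unlock();

			/* ... reclaim the batch here ... */

			cond_resched();	/* between batches, outside RCU */
		} while (nr_found);
		xfs_perag_put(pag);
	}
}

So cond_resched() does run between batches, outside the RCU read-side
critical section, which is why we are puzzled that the stall is still
reported inside the gang lookup itself.
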
Thanks,
Alex.
-----Original Message-----
From: Dave Chinner
Sent: Monday, November 16, 2020 11:30 PM
To: Alex Lyakas
Cc: linux-xfs@xxxxxxxxxxxxxxx
Subject: Re: RCU stall in xfs_reclaim_inodes_ag
On Mon, Nov 16, 2020 at 07:45:46PM +0200, Alex Lyakas wrote:
> Greetings XFS community,
>
> We had an RCU stall [1]. According to the code, it happened in
> radix_tree_gang_lookup_tag():
>
> 	rcu_read_lock();
> 	nr_found = radix_tree_gang_lookup_tag(
> 				&pag->pag_ici_root,
> 				(void **)batch, first_index,
> 				XFS_LOOKUP_BATCH,
> 				XFS_ICI_RECLAIM_TAG);
>
> This XFS system has over 100M files. So perhaps looping inside the radix
> tree took too long, and it was happening in an RCU read-side critical
> section. This is one of the possible causes for an RCU stall.
Doubt it. According to the trace it was stalled for 60s, and a
radix tree walk of 100M entries only takes a second or two.
Further, unless you are using inode32, the inodes will be spread
across multiple radix trees and that makes the radix trees much
smaller and even less likely to take this long to run a traversal.
This could be made a little more efficient by adding a "last index"
parameter to tell the search where to stop (i.e. if the batch count
has not yet been reached), but in general that makes little
difference to the search because the radix tree walk finds the next
inodes in a few pointer chases...
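
Purely to illustrate what I mean by that (the wrapper below is entirely
hypothetical, no such parameter exists in the current radix tree API, and
this caller-side version only trims the returned batch rather than bounding
the tree walk itself):

static unsigned int
bounded_gang_lookup_tag(struct radix_tree_root *root, void **results,
			unsigned long first_index, unsigned long last_index,
			unsigned int max_items, unsigned int tag)
{
	unsigned int	nr_found, i;

	nr_found = radix_tree_gang_lookup_tag(root, results, first_index,
					      max_items, tag);
	for (i = 0; i < nr_found; i++) {
		/* assumes the entries are XFS inodes indexed by agino */
		struct xfs_inode *ip = results[i];

		if (XFS_INO_TO_AGINO(ip->i_mount, ip->i_ino) > last_index)
			break;
	}
	/* only the entries within [first_index, last_index] */
	return i;
}
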
> This happened in kernel 4.14.99, but looking at the latest mainline code,
> the code is still the same.
These inode radix trees have been used in XFS since 2008, and this
is the first time anyone has reported a stall like this, so I'm
doubtful that there is actually a general bug. My suspicion for such
a rare occurrence would be memory corruption of some kind or a
leaked atomic/rcu state in some other code on that CPU....
> Can anyone please advise how to address that? It is not possible to put
> cond_resched() inside the radix tree code, because it can be used with
> spinlocks, and perhaps in other contexts where sleeping is not allowed.
I don't think there is a solution to this problem - it just
shouldn't happen when everything is operating normally, as it's
just a tag search on an indexed tree.
Hence even if there was a hack to stop stall warnings, it won't fix
whatever problem is leading to the rcu stall. The system will then
just spin burning CPU, and eventually something else will fail.
IOWs, unless you can reproduce this stall and find out what is wrong
in the radix tree that is leading to it looping forever, there's
likely nothing we can do to avoid this.
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx