[STABLE-PATCH] xfs: Correctly invert xfs_buftarg LRU isolation logic

From: Vratislav Bendel <vbendel@xxxxxxxxxx>

[upstream commit 19957a181608d25c8f4136652d0ea00b3738972d]

Due to an inverted logic mistake in xfs_buftarg_isolate(), xfs_buffers
with zero b_lru_ref take another trip around the LRU, while buffers
with non-zero b_lru_ref are isolated instead.

Additionally, those isolated buffers end up right back on the LRU once
they are released, because their b_lru_ref remains elevated.

Fix that circuitous route by leaving them on the LRU
as originally intended.
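
To make the inversion concrete, below is a minimal userspace sketch (not
kernel code; toy_add_unless() and its callers are invented for
illustration) of the atomic_add_unless() semantics the test relies on:
the helper performs the add only when the current value differs from the
"unless" value, and returns non-zero only when the add was performed.
With the leading '!', the rotate branch is therefore taken exactly when
b_lru_ref is already zero, the opposite of the intent.

#include <stdio.h>

/* Toy stand-in for the kernel's atomic_add_unless(v, a, u):
 * adds 'a' to '*v' unless '*v' equals 'u'; returns non-zero
 * iff the add actually happened. */
static int toy_add_unless(int *v, int a, int u)
{
        if (*v == u)
                return 0;       /* value was already 'u', nothing done */
        *v += a;
        return 1;               /* add performed */
}

int main(void)
{
        int lru_ref = 0;

        /* Buggy test: '!' makes the zero-ref buffer rotate. */
        if (!toy_add_unless(&lru_ref, -1, 0))
                printf("b_lru_ref == 0: buggy test rotates instead of reclaiming\n");

        lru_ref = 2;
        /* Fixed test: a successful decrement means "keep it on the LRU". */
        if (toy_add_unless(&lru_ref, -1, 0))
                printf("non-zero b_lru_ref: fixed test rotates, ref is now %d\n",
                       lru_ref);
        return 0;
}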

[Additional description of the issue]

Due to this issue, buffers spend one cycle less on the LRU than
intended. If we initialize b_lru_ref to X, we intend the buffer to
survive X shrinker calls and to be taken off the LRU (and possibly
freed) on the (X+1)'th call. With this issue, however, each shrinker
call takes the buffer off the LRU and immediately re-adds it. That
re-adding happens only X-1 times, because on the X'th call b_lru_ref
drops to 0 and the buffer is not put back on the LRU. So the buffer
survives X-1 shrinker calls instead of the intended X.
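
The X versus X-1 arithmetic can be checked with the same toy model.
The sketch below (again plain userspace C with invented helpers, not
the real shrinker) walks a single buffer through repeated shrinker
passes and counts how many it survives under the fixed and the buggy
test.

#include <stdio.h>
#include <stdbool.h>

/* Same toy stand-in for atomic_add_unless() as in the sketch above. */
static int toy_add_unless(int *v, int a, int u)
{
        if (*v == u)
                return 0;
        *v += a;
        return 1;
}

/* Number of shrinker passes a buffer with b_lru_ref == x survives. */
static int passes_survived(int x, bool buggy)
{
        int ref = x, passes = 0;

        /* With buggy == true and x == 0 this would loop forever,
         * mirroring the never-reclaimed case described below. */
        for (;;) {
                int decremented = toy_add_unless(&ref, -1, 0);
                int rotate = buggy ? !decremented : decremented;

                if (rotate) {           /* left on the LRU */
                        passes++;
                        continue;
                }
                /* Isolated.  The buggy code re-adds the buffer on
                 * release while its refcount is still elevated. */
                if (buggy && ref > 0) {
                        passes++;
                        continue;
                }
                return passes;          /* reclaimed for good */
        }
}

int main(void)
{
        int x = 3;

        printf("b_lru_ref = %d: fixed logic survives %d passes, buggy logic %d\n",
               x, passes_survived(x, false), passes_survived(x, true));
        return 0;
}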

Furthermore, if we somehow end up with a buffer sitting on the LRU with
b_lru_ref == 0, that buffer will never be taken off the LRU because of
this bug. It is not clear whether this can happen in practice, since
b_lru_ref is set to 1 by default.

This issue has existed since the introduction of the LRU in the XFS
buffer cache in commit 430cbeb86fdcbbdabea7d4aa65307de8de425350
("xfs: add a lru to the XFS buffer cache").

However, the integration with the list_lru infrastructure was done in
kernel 3.12, in commit e80dfa19976b884db1ac2bc5d7d6ca0a4027bd1c
("xfs: convert buftarg LRU to generic code").

Therefore, this patch is relevant for all kernels from 3.12 to 4.15
(the upstream fix landed in 4.16).

Signed-off-by: Alex Lyakas <alex@xxxxxxxxxx>
Signed-off-by: Vratislav Bendel <vbendel@xxxxxxxxxx>
Reviewed-by: Brian Foster <bfoster@xxxxxxxxxx>
Reviewed-by: Christoph Hellwig <hch@xxxxxx>
Reviewed-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
Signed-off-by: Darrick J. Wong <darrick.wong@xxxxxxxxxx>
(cherry picked from commit 19957a181608d25c8f4136652d0ea00b3738972d)
---
 fs/xfs/xfs_buf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 16f93d7..e4a6239 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -1702,7 +1702,7 @@ struct xfs_buf *
 	 * zero. If the value is already zero, we need to reclaim the
 	 * buffer, otherwise it gets another trip through the LRU.
 	 */
-	if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+	if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
 		spin_unlock(&bp->b_lock);
 		return LRU_ROTATE;
 	}
-- 
1.9.1



