[PATCH] libfs: Add a lock class for the offset map's xa_lock

From: Chuck Lever <chuck.lever@xxxxxxxxxx>

Tie the dynamically-allocated xarray locks into a single class so
contention on the directory offset xarrays can be observed.

Signed-off-by: Chuck Lever <chuck.lever@xxxxxxxxxx>
---
 fs/libfs.c |    3 +++
 1 file changed, 3 insertions(+)

I've been looking into the recent kernel bot reports of performance
regressions on the will-it-scale benchmark.

https://lore.kernel.org/linux-mm/202307171640.e299f8d5-oliver.sang@xxxxxxxxx/

I haven't been able to run the reproducer yet, but I have created a
small change to demonstrate that the xa_lock itself is unlikely to
be the issue: all tests I've run here show "0.0" in the lock_stat
contention metrics for the simple_offset_xa_lock class.

It seems reasonable to include this small change in the patches
already applied to your tree.
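For reference, this is roughly how I've been reading the contention
numbers (an assumed workflow, not part of the patch). It requires a
kernel built with CONFIG_LOCK_STAT=y and root privileges:

```shell
# Sketch only: needs CONFIG_LOCK_STAT=y and root.
# Any write to /proc/lock_stat clears the counters;
# /proc/sys/kernel/lock_stat toggles collection on and off.
echo 0 > /proc/lock_stat                # clear old statistics
echo 1 > /proc/sys/kernel/lock_stat     # start collecting
# ... run the workload under test (e.g. will-it-scale) ...
echo 0 > /proc/sys/kernel/lock_stat     # stop collecting
grep -A 1 simple_offset_xa_lock /proc/lock_stat
```

Without this patch the xarray locks are allocated dynamically and
each gets its own lockdep key, so they have no shared row in
/proc/lock_stat to grep for; tying them into one named class is what
makes the aggregate contention figures observable.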


diff --git a/fs/libfs.c b/fs/libfs.c
index 68b0000dc518..fcc0f1f3c2dc 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -249,6 +249,8 @@ static unsigned long dentry2offset(struct dentry *dentry)
 	return (unsigned long)dentry->d_fsdata;
 }
 
+static struct lock_class_key simple_offset_xa_lock;
+
 /**
  * simple_offset_init - initialize an offset_ctx
  * @octx: directory offset map to be initialized
@@ -257,6 +259,7 @@ static unsigned long dentry2offset(struct dentry *dentry)
 void simple_offset_init(struct offset_ctx *octx)
 {
 	xa_init_flags(&octx->xa, XA_FLAGS_ALLOC1);
+	lockdep_set_class(&octx->xa.xa_lock, &simple_offset_xa_lock);
 
 	/* 0 is '.', 1 is '..', so always start with offset 2 */
 	octx->next_offset = 2;
