[PATCH] xarray: rename xa_lock/xa_unlock to xa_enter/xa_leave

Functions such as __xa_store() may temporarily unlock the internal
spinlock if allocation is necessary. This means that code such as

	xa_lock(xa);
	__xa_store(xa, idx1, ptr1, GFP_KERNEL);
	__xa_store(xa, idx2, ptr2, GFP_KERNEL);
	xa_unlock(xa);

does not prevent the situation where another thread sees the first store
without seeing the second store. Even if GFP_ATOMIC is used, this can
still happen if the reader uses xa_load without taking the xa_lock. This
is not the behavior that you would expect from a lock. We should not
subvert the expectations of those reading the code.
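
For illustration only (this reader is hypothetical and not part of this
patch), an unlocked reader can observe the partial update like so:

	/*
	 * xa_load() takes only the RCU read lock, so this read can land
	 * between the two stores above; even a reader that takes the
	 * spinlock can slip in while __xa_store() has dropped it to
	 * allocate memory.
	 */
	void *a = xa_load(xa, idx1);	/* may already be ptr1 */
	void *b = xa_load(xa, idx2);	/* ... while this is still NULL */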

Thus, rename xa_lock/xa_unlock to xa_enter/xa_leave. Users of the XArray
will have fewer implicit expectations about how functions with these
names will behave, which encourages them to check the documentation.
The documentation is amended with additional notes about these caveats.

The previous example becomes:

	xa_enter(xa);
	__xa_store(xa, idx1, ptr1, GFP_KERNEL);
	__xa_store(xa, idx2, ptr2, GFP_KERNEL);
	xa_leave(xa);
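
Following the caveats above, one way to keep the pair of stores atomic
with respect to other threads is to allocate with GFP_ATOMIC, so that
the lock is never dropped, and to have readers take the lock as well.
A sketch of that pattern, for illustration only (error handling for
failed atomic allocations is omitted):

	/* Writer: with GFP_ATOMIC, __xa_store() does not drop the lock. */
	xa_enter(xa);
	__xa_store(xa, idx1, ptr1, GFP_ATOMIC);
	__xa_store(xa, idx2, ptr2, GFP_ATOMIC);
	xa_leave(xa);

	/*
	 * Reader: taking the lock excludes the writer above, so it sees
	 * either both entries or neither.
	 */
	xa_enter(xa);
	p1 = xa_load(xa, idx1);
	p2 = xa_load(xa, idx2);
	xa_leave(xa);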

Existing users of these functions will be updated to use the new names
in follow-up patches. The old names will be deleted later to avoid
conflicts with new code using xa_lock().

The idea to rename these functions came up during a discussion at the
Linux Plumbers Conference 2024. I was working on a Rust API for using
the XArray, but was dissatisfied with how confusing the resulting API
turned out to be, for the same reasons as outlined above.

Signed-off-by: Alice Ryhl <aliceryhl@xxxxxxxxxx>
---
 Documentation/core-api/xarray.rst |  75 ++++++++++++-------
 include/linux/xarray.h            | 148 +++++++++++++++++++++++---------------
 2 files changed, 141 insertions(+), 82 deletions(-)

diff --git a/Documentation/core-api/xarray.rst b/Documentation/core-api/xarray.rst
index 77e0ece2b1d6..2f3546cc2db2 100644
--- a/Documentation/core-api/xarray.rst
+++ b/Documentation/core-api/xarray.rst
@@ -200,7 +200,7 @@ Takes RCU read lock:
  * xa_extract()
  * xa_get_mark()
 
-Takes xa_lock internally:
+Internally calls xa_enter to take the spinlock:
  * xa_store()
  * xa_store_bh()
  * xa_store_irq()
@@ -224,7 +224,7 @@ Takes xa_lock internally:
  * xa_set_mark()
  * xa_clear_mark()
 
-Assumes xa_lock held on entry:
+Caller must use xa_enter:
  * __xa_store()
  * __xa_insert()
  * __xa_erase()
@@ -233,14 +233,41 @@ Assumes xa_lock held on entry:
  * __xa_set_mark()
  * __xa_clear_mark()
 
+Variants of xa_enter and xa_leave:
+ * xa_enter()
+ * xa_tryenter()
+ * xa_enter_bh()
+ * xa_enter_irq()
+ * xa_enter_irqsave()
+ * xa_enter_nested()
+ * xa_enter_bh_nested()
+ * xa_enter_irq_nested()
+ * xa_enter_irqsave_nested()
+ * xa_leave()
+ * xa_leave_bh()
+ * xa_leave_irq()
+ * xa_leave_irqrestore()
+
+The xa_enter() and xa_leave() functions correspond to spin_lock() and
+spin_unlock() on the internal spinlock.  Be aware that functions such as
+__xa_store() may temporarily unlock the internal spinlock to allocate memory.
+Because of that, if you make several calls to __xa_store() within a single
+xa_enter()/xa_leave() pair, other users of the XArray may see the first store
+without seeing the second store.  The xa_enter() function is deliberately not
+called xa_lock(), to emphasize this distinction.
+
-If you want to take advantage of the lock to protect the data structures
-that you are storing in the XArray, you can call xa_lock()
-before calling xa_load(), then take a reference count on the
-object you have found before calling xa_unlock().  This will
-prevent stores from removing the object from the array between looking
-up the object and incrementing the refcount.  You can also use RCU to
-avoid dereferencing freed memory, but an explanation of that is beyond
-the scope of this document.
+If you want to take advantage of the lock to protect the data stored in the
+XArray, then you can use xa_enter() and xa_leave() to enter and leave the
+critical region of the internal spinlock. For example, you can enter the critical
+region with xa_enter(), look up a value with xa_load(), increment the refcount,
+and then call xa_leave().  This will prevent stores from removing the object
+from the array between looking up the object and incrementing the refcount.
+
+Instead of xa_enter(), you can also use RCU to avoid dereferencing freed
+memory, but an explanation of that is beyond the scope of this document.
+
+Interrupts
+----------
 
 The XArray does not disable interrupts or softirqs while modifying
 the array.  It is safe to read the XArray from interrupt or softirq
@@ -258,21 +285,21 @@ context and then erase them in softirq context, you can do that this way::
     {
         int err;
 
-        xa_lock_bh(&foo->array);
+        xa_enter_bh(&foo->array);
         err = xa_err(__xa_store(&foo->array, index, entry, GFP_KERNEL));
         if (!err)
             foo->count++;
-        xa_unlock_bh(&foo->array);
+        xa_leave_bh(&foo->array);
         return err;
     }
 
     /* foo_erase() is only called from softirq context */
     void foo_erase(struct foo *foo, unsigned long index)
     {
-        xa_lock(&foo->array);
+        xa_enter(&foo->array);
         __xa_erase(&foo->array, index);
         foo->count--;
-        xa_unlock(&foo->array);
+        xa_leave(&foo->array);
     }
 
 If you are going to modify the XArray from interrupt or softirq context,
@@ -280,12 +307,12 @@ you need to initialise the array using xa_init_flags(), passing
 ``XA_FLAGS_LOCK_IRQ`` or ``XA_FLAGS_LOCK_BH``.
 
 The above example also shows a common pattern of wanting to extend the
-coverage of the xa_lock on the store side to protect some statistics
-associated with the array.
+coverage of the internal spinlock on the store side to protect some
+statistics associated with the array.
 
 Sharing the XArray with interrupt context is also possible, either
-using xa_lock_irqsave() in both the interrupt handler and process
-context, or xa_lock_irq() in process context and xa_lock()
+using xa_enter_irqsave() in both the interrupt handler and process
+context, or xa_enter_irq() in process context and xa_enter()
 in the interrupt handler.  Some of the more common patterns have helper
 functions such as xa_store_bh(), xa_store_irq(),
 xa_erase_bh(), xa_erase_irq(), xa_cmpxchg_bh()
@@ -293,8 +320,8 @@ and xa_cmpxchg_irq().
 
 Sometimes you need to protect access to the XArray with a mutex because
 that lock sits above another mutex in the locking hierarchy.  That does
-not entitle you to use functions like __xa_erase() without taking
-the xa_lock; the xa_lock is used for lockdep validation and will be used
+not entitle you to use functions like __xa_erase() without calling
+xa_enter; the XArray lock is used for lockdep validation and will be used
 for other purposes in the future.
 
 The __xa_set_mark() and __xa_clear_mark() functions are also
@@ -308,8 +335,8 @@ Advanced API
 The advanced API offers more flexibility and better performance at the
 cost of an interface which can be harder to use and has fewer safeguards.
 No locking is done for you by the advanced API, and you are required
-to use the xa_lock while modifying the array.  You can choose whether
-to use the xa_lock or the RCU lock while doing read-only operations on
+to use xa_enter while modifying the array.  You can choose whether
+to use xa_enter or the RCU lock while doing read-only operations on
 the array.  You can mix advanced and normal operations on the same array;
 indeed the normal API is implemented in terms of the advanced API.  The
 advanced API is only available to modules with a GPL-compatible license.
@@ -320,8 +347,8 @@ This macro initialises the xa_state ready to start walking around the
 XArray.  It is used as a cursor to maintain the position in the XArray
 and let you compose various operations together without having to restart
 from the top every time.  The contents of the xa_state are protected by
-the rcu_read_lock() or the xas_lock().  If you need to drop whichever of
-those locks is protecting your state and tree, you must call xas_pause()
+the rcu_read_lock() or the xas_enter() lock.  If you need to drop whichever
+of those locks is protecting your state and tree, you must call xas_pause()
 so that future calls do not rely on the parts of the state which were
 left unprotected.
 
diff --git a/include/linux/xarray.h b/include/linux/xarray.h
index 0b618ec04115..dde10de4e6bf 100644
--- a/include/linux/xarray.h
+++ b/include/linux/xarray.h
@@ -532,29 +532,48 @@ static inline bool xa_marked(const struct xarray *xa, xa_mark_t mark)
 	for (index = 0, entry = xa_find(xa, &index, ULONG_MAX, filter); \
 	     entry; entry = xa_find_after(xa, &index, ULONG_MAX, filter))
 
-#define xa_trylock(xa)		spin_trylock(&(xa)->xa_lock)
-#define xa_lock(xa)		spin_lock(&(xa)->xa_lock)
-#define xa_unlock(xa)		spin_unlock(&(xa)->xa_lock)
-#define xa_lock_bh(xa)		spin_lock_bh(&(xa)->xa_lock)
-#define xa_unlock_bh(xa)	spin_unlock_bh(&(xa)->xa_lock)
-#define xa_lock_irq(xa)		spin_lock_irq(&(xa)->xa_lock)
-#define xa_unlock_irq(xa)	spin_unlock_irq(&(xa)->xa_lock)
-#define xa_lock_irqsave(xa, flags) \
+#define xa_tryenter(xa)		spin_trylock(&(xa)->xa_lock)
+#define xa_enter(xa)		spin_lock(&(xa)->xa_lock)
+#define xa_leave(xa)		spin_unlock(&(xa)->xa_lock)
+#define xa_enter_bh(xa)		spin_lock_bh(&(xa)->xa_lock)
+#define xa_leave_bh(xa)		spin_unlock_bh(&(xa)->xa_lock)
+#define xa_enter_irq(xa)	spin_lock_irq(&(xa)->xa_lock)
+#define xa_leave_irq(xa)	spin_unlock_irq(&(xa)->xa_lock)
+#define xa_enter_irqsave(xa, flags) \
 				spin_lock_irqsave(&(xa)->xa_lock, flags)
-#define xa_unlock_irqrestore(xa, flags) \
+#define xa_leave_irqrestore(xa, flags) \
 				spin_unlock_irqrestore(&(xa)->xa_lock, flags)
-#define xa_lock_nested(xa, subclass) \
+#define xa_enter_nested(xa, subclass) \
 				spin_lock_nested(&(xa)->xa_lock, subclass)
-#define xa_lock_bh_nested(xa, subclass) \
+#define xa_enter_bh_nested(xa, subclass) \
 				spin_lock_bh_nested(&(xa)->xa_lock, subclass)
-#define xa_lock_irq_nested(xa, subclass) \
+#define xa_enter_irq_nested(xa, subclass) \
 				spin_lock_irq_nested(&(xa)->xa_lock, subclass)
-#define xa_lock_irqsave_nested(xa, flags, subclass) \
+#define xa_enter_irqsave_nested(xa, flags, subclass) \
 		spin_lock_irqsave_nested(&(xa)->xa_lock, flags, subclass)
 
+/*
+ * These names are deprecated. Please use xa_enter instead of xa_lock, and
+ * xa_leave instead of xa_unlock.
+ */
+#define xa_trylock(xa)			xa_tryenter(xa)
+#define xa_lock(xa)			xa_enter(xa)
+#define xa_unlock(xa)			xa_leave(xa)
+#define xa_lock_bh(xa)			xa_enter_bh(xa)
+#define xa_unlock_bh(xa)		xa_leave_bh(xa)
+#define xa_lock_irq(xa)			xa_enter_irq(xa)
+#define xa_unlock_irq(xa)		xa_leave_irq(xa)
+#define xa_lock_irqsave(xa, flags)	xa_enter_irqsave(xa, flags)
+#define xa_unlock_irqrestore(xa, flags) xa_leave_irqrestore(xa, flags)
+#define xa_lock_nested(xa, subclass)	xa_enter_nested(xa, subclass)
+#define xa_lock_bh_nested(xa, subclass) xa_enter_bh_nested(xa, subclass)
+#define xa_lock_irq_nested(xa, subclass) xa_enter_irq_nested(xa, subclass)
+#define xa_lock_irqsave_nested(xa, flags, subclass) \
+		xa_enter_irqsave_nested(xa, flags, subclass)
+
 /*
- * Versions of the normal API which require the caller to hold the
- * xa_lock.  If the GFP flags allow it, they will drop the lock to
+ * Versions of the normal API which require the caller to use
+ * xa_enter.  If the GFP flags allow it, they will drop the lock to
  * allocate memory, then reacquire it afterwards.  These functions
  * may also re-enable interrupts if the XArray flags indicate the
  * locking should be interrupt safe.
@@ -592,9 +611,9 @@ static inline void *xa_store_bh(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	curr = __xa_store(xa, index, entry, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return curr;
 }
@@ -619,9 +638,9 @@ static inline void *xa_store_irq(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	curr = __xa_store(xa, index, entry, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return curr;
 }
@@ -643,9 +662,9 @@ static inline void *xa_erase_bh(struct xarray *xa, unsigned long index)
 {
 	void *entry;
 
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	entry = __xa_erase(xa, index);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return entry;
 }
@@ -667,9 +686,9 @@ static inline void *xa_erase_irq(struct xarray *xa, unsigned long index)
 {
 	void *entry;
 
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	entry = __xa_erase(xa, index);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return entry;
 }
@@ -695,9 +714,9 @@ static inline void *xa_cmpxchg(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return curr;
 }
@@ -723,9 +742,9 @@ static inline void *xa_cmpxchg_bh(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return curr;
 }
@@ -751,9 +770,9 @@ static inline void *xa_cmpxchg_irq(struct xarray *xa, unsigned long index,
 	void *curr;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	curr = __xa_cmpxchg(xa, index, old, entry, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return curr;
 }
@@ -781,9 +800,9 @@ static inline int __must_check xa_insert(struct xarray *xa,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	err = __xa_insert(xa, index, entry, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return err;
 }
@@ -811,9 +830,9 @@ static inline int __must_check xa_insert_bh(struct xarray *xa,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	err = __xa_insert(xa, index, entry, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return err;
 }
@@ -841,9 +860,9 @@ static inline int __must_check xa_insert_irq(struct xarray *xa,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	err = __xa_insert(xa, index, entry, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return err;
 }
@@ -874,9 +893,9 @@ static inline __must_check int xa_alloc(struct xarray *xa, u32 *id,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	err = __xa_alloc(xa, id, entry, limit, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return err;
 }
@@ -907,9 +926,9 @@ static inline int __must_check xa_alloc_bh(struct xarray *xa, u32 *id,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	err = __xa_alloc(xa, id, entry, limit, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return err;
 }
@@ -940,9 +959,9 @@ static inline int __must_check xa_alloc_irq(struct xarray *xa, u32 *id,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	err = __xa_alloc(xa, id, entry, limit, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return err;
 }
@@ -977,9 +996,9 @@ static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock(xa);
+	xa_enter(xa);
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
-	xa_unlock(xa);
+	xa_leave(xa);
 
 	return err;
 }
@@ -1014,9 +1033,9 @@ static inline int xa_alloc_cyclic_bh(struct xarray *xa, u32 *id, void *entry,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_bh(xa);
+	xa_enter_bh(xa);
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
-	xa_unlock_bh(xa);
+	xa_leave_bh(xa);
 
 	return err;
 }
@@ -1051,9 +1070,9 @@ static inline int xa_alloc_cyclic_irq(struct xarray *xa, u32 *id, void *entry,
 	int err;
 
 	might_alloc(gfp);
-	xa_lock_irq(xa);
+	xa_enter_irq(xa);
 	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
-	xa_unlock_irq(xa);
+	xa_leave_irq(xa);
 
 	return err;
 }
@@ -1408,17 +1427,30 @@ struct xa_state {
 			(1U << (order % XA_CHUNK_SHIFT)) - 1)
 
 #define xas_marked(xas, mark)	xa_marked((xas)->xa, (mark))
-#define xas_trylock(xas)	xa_trylock((xas)->xa)
-#define xas_lock(xas)		xa_lock((xas)->xa)
-#define xas_unlock(xas)		xa_unlock((xas)->xa)
-#define xas_lock_bh(xas)	xa_lock_bh((xas)->xa)
-#define xas_unlock_bh(xas)	xa_unlock_bh((xas)->xa)
-#define xas_lock_irq(xas)	xa_lock_irq((xas)->xa)
-#define xas_unlock_irq(xas)	xa_unlock_irq((xas)->xa)
-#define xas_lock_irqsave(xas, flags) \
-				xa_lock_irqsave((xas)->xa, flags)
-#define xas_unlock_irqrestore(xas, flags) \
-				xa_unlock_irqrestore((xas)->xa, flags)
+#define xas_tryenter(xas)	xa_tryenter((xas)->xa)
+#define xas_enter(xas)		xa_enter((xas)->xa)
+#define xas_leave(xas)		xa_leave((xas)->xa)
+#define xas_enter_bh(xas)	xa_enter_bh((xas)->xa)
+#define xas_leave_bh(xas)	xa_leave_bh((xas)->xa)
+#define xas_enter_irq(xas)	xa_enter_irq((xas)->xa)
+#define xas_leave_irq(xas)	xa_leave_irq((xas)->xa)
+#define xas_enter_irqsave(xas, flags) xa_enter_irqsave((xas)->xa, flags)
+#define xas_leave_irqrestore(xas, flags) xa_leave_irqrestore((xas)->xa, flags)
+
+
+/*
+ * These names are deprecated. Please use xas_enter instead of xas_lock, and
+ * xas_leave instead of xas_unlock.
+ */
+#define xas_trylock(xas)			xas_tryenter(xas)
+#define xas_lock(xas)				xas_enter(xas)
+#define xas_unlock(xas)				xas_leave(xas)
+#define xas_lock_bh(xas)			xas_enter_bh(xas)
+#define xas_unlock_bh(xas)			xas_leave_bh(xas)
+#define xas_lock_irq(xas)			xas_enter_irq(xas)
+#define xas_unlock_irq(xas)			xas_leave_irq(xas)
+#define xas_lock_irqsave(xas, flags)		xas_enter_irqsave(xas, flags)
+#define xas_unlock_irqrestore(xas, flags)	xas_leave_irqrestore(xas, flags)
 
 /**
  * xas_error() - Return an errno stored in the xa_state.

---
base-commit: 98f7e32f20d28ec452afb208f9cffc08448a2652
change-id: 20240921-xa_enter_leave-b11552c3caa2

Best regards,
-- 
Alice Ryhl <aliceryhl@xxxxxxxxxx>