I have seen several attempts to use mutex_unlock() to release an object
so that the object can then be freed by another task.
This is not safe because mutex_unlock(), in the
MUTEX_FLAG_WAITERS && !MUTEX_FLAG_HANDOFF case, accesses the mutex
structure after having marked it as unlocked; so mutex_unlock() requires
its caller to ensure that the mutex stays alive until mutex_unlock()
returns.
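To illustrate, here is a rough sketch of the kind of pattern I mean
(struct foo, foo_release() and foo_try_free() are made up for this example
and not taken from any real caller):

  struct foo {
          struct mutex lock;
          bool released;
  };

  /* task A: signal that it is done with the object */
  static void foo_release(struct foo *foo)
  {
          mutex_lock(&foo->lock);
          foo->released = true;
          mutex_unlock(&foo->lock);  /* may still access foo->lock here */
  }

  /* task B: free the object once task A has released it */
  static void foo_try_free(struct foo *foo)
  {
          bool released;

          mutex_lock(&foo->lock);
          released = foo->released;
          mutex_unlock(&foo->lock);
          if (released)
                  kfree(foo);  /* can race with task A's mutex_unlock() */
  }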
If MUTEX_FLAG_WAITERS is set and there are real waiters, those waiters
have to keep the mutex alive, but we could have a spurious
MUTEX_FLAG_WAITERS left if an interruptible/killable waiter bailed
between the points where __mutex_unlock_slowpath() did the cmpxchg
reading the flags and where it acquired the wait_lock.
(With spinlocks, that kind of code pattern is allowed and, from what I
remember, is used in several places in the kernel.)
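For comparison, if foo->lock in the sketch above were a spinlock_t, the
same pattern would be fine (again, this is just a made-up sketch), because
spin_unlock() does not access the lock object anymore once it has marked it
as unlocked:

  /* task A */
  spin_lock(&foo->lock);
  foo->released = true;
  spin_unlock(&foo->lock);  /* does not touch foo->lock afterwards */

  /* task B, concurrently */
  spin_lock(&foo->lock);
  released = foo->released;
  spin_unlock(&foo->lock);
  if (released)
          kfree(foo);  /* fine: spin_unlock() is done with the lock */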
Document this, as such a semantic difference between mutexes and spinlocks
is fairly unintuitive. Based on feedback on the list, this should be
documented as a general locking caveat, not as a mutex-specific thing.
(changelog with some input from mingo)
Signed-off-by: Jann Horn <jannh@xxxxxxxxxx>
---
Based on feedback on the list, I've gotten rid of the confusing
"atomic" wording.
Also, based on Peter Zijlstra's feedback that this is more of a general
thing with sleeping locks and not specific to mutexes, I have rewritten
the patch to have some central documentation on the caveat in
Documentation/locking/locktypes.rst, and then just sprinkle some
references to that in a few other places.
I saw that the first version of this patch already landed in the tip
tree; can you still yank that back out? If not, maybe revert that
for now, and then later land this new version (or a future revision of
it) once we've figured out if the new wording is good?
Documentation/locking/locktypes.rst | 23 +++++++++++++++++++++++
Documentation/locking/mutex-design.rst | 2 ++
kernel/locking/mutex.c | 5 +++++
kernel/locking/rwsem.c | 10 ++++++++++
4 files changed, 40 insertions(+)
diff --git a/Documentation/locking/locktypes.rst b/Documentation/locking/locktypes.rst
index 80c914f6eae7..c9a4bcc967ea 100644
--- a/Documentation/locking/locktypes.rst
+++ b/Documentation/locking/locktypes.rst
@@ -95,6 +95,29 @@ rw_semaphores have a special interface which allows non-owner release for
readers.
+Releasing and freeing
+=====================
+For some lock types, such as spinlocks, the lock release operation is designed
+to allow another concurrent task to free the lock as soon as the lock has been
+released - in other words, similarly to refcounts, the unlock operation will not
+access the lock object anymore after marking it as unlocked.
+
+This behavior is guaranteed for:
+
+ - spinlock_t (including in PREEMPT_RT kernels, where spinlock_t is
+ implemented as an rtmutex)
+
+There are other lock types where the lock release operation makes no such
+guarantee and the caller must ensure that the lock is not destroyed before the
+unlock operation has returned.
+Most sleeping locks are in this category.
+
+This is the case in particular for (not an exhaustive list):
+
+ - mutex
+ - rw_semaphore
+
+
rtmutex
=======
diff --git a/Documentation/locking/mutex-design.rst b/Documentation/locking/mutex-design.rst
index 78540cd7f54b..bbb4c4d56ed0 100644
--- a/Documentation/locking/mutex-design.rst
+++ b/Documentation/locking/mutex-design.rst
@@ -101,6 +101,8 @@ features that make lock debugging easier and faster:
- Detects multi-task circular deadlocks and prints out all affected
locks and tasks (and only those tasks).
+The mutex user must ensure that the mutex is not destroyed while an unlock
+operation is still in progress; see Documentation/locking/locktypes.rst.
Interfaces
----------
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 2deeeca3e71b..fa4834dba407 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -532,6 +532,11 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
* This function must not be used in interrupt context. Unlocking
* of a not locked mutex is not allowed.
*
+ * The caller must ensure that the mutex stays alive until this function has
+ * returned - mutex_unlock() can NOT directly be used to release an object such
+ * that another concurrent task can free it.
+ * See Documentation/locking/locktypes.rst.
+ *
* This function is similar to (but not equivalent to) up().
*/
void __sched mutex_unlock(struct mutex *lock)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 2340b6d90ec6..cbc00a269deb 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1615,6 +1615,11 @@ EXPORT_SYMBOL(down_write_trylock);
/*
* release a read lock
+ *
+ * The caller must ensure that the rw_semaphore stays alive until this function
+ * has returned - up_read() can NOT directly be used to release an object such
+ * that another concurrent task can free it.
+ * See Documentation/locking/locktypes.rst.
*/
void up_read(struct rw_semaphore *sem)
{
@@ -1625,6 +1630,11 @@ EXPORT_SYMBOL(up_read);
/*
* release a write lock
+ *
+ * The caller must ensure that the rw_semaphore stays alive until this function
+ * has returned - up_write() can NOT directly be used to release an object such
+ * that another concurrent task can free it.
+ * See Documentation/locking/locktypes.rst.
*/
void up_write(struct rw_semaphore *sem)
{