This is a note to let you know that I've just added the patch titled

    vhost: Use virtqueue mutex for swapping worker

to the 6.9-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     vhost-use-virtqueue-mutex-for-swapping-worker.patch
and it can be found in the queue-6.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


commit 0fe495244d220b436e85ffbce013d6074ffde15b
Author: Mike Christie <michael.christie@xxxxxxxxxx>
Date:   Fri Mar 15 19:47:04 2024 -0500

    vhost: Use virtqueue mutex for swapping worker

    [ Upstream commit 34cf9ba5f00a222dddd9fc71de7c68fdaac7fb97 ]

    __vhost_vq_attach_worker uses the vhost_dev mutex to serialize the
    swapping of a virtqueue's worker. This was done for simplicity because
    we are already holding that mutex.

    In the next patches where the worker can be killed while in use, we
    need finer grained locking because some drivers will hold the vhost_dev
    mutex while flushing. However in the SIGKILL handler in the next
    patches, we will need to be able to swap workers (set current one to
    NULL), kill queued works and stop new flushes while flushes are in
    progress. To prepare us, this has us use the virtqueue mutex for
    swapping workers instead of the vhost_dev one.

    Signed-off-by: Mike Christie <michael.christie@xxxxxxxxxx>
    Message-Id: <20240316004707.45557-7-michael.christie@xxxxxxxxxx>
    Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
    Stable-dep-of: db5247d9bf5c ("vhost_task: Handle SIGKILL by flushing work and exiting")
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 8995730ce0bfc..113b6a42719b7 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -664,16 +664,22 @@ static void __vhost_vq_attach_worker(struct vhost_virtqueue *vq,
 {
 	struct vhost_worker *old_worker;
 
-	old_worker = rcu_dereference_check(vq->worker,
-					   lockdep_is_held(&vq->dev->mutex));
-
 	mutex_lock(&worker->mutex);
-	worker->attachment_cnt++;
-	mutex_unlock(&worker->mutex);
+	mutex_lock(&vq->mutex);
+
+	old_worker = rcu_dereference_check(vq->worker,
+					   lockdep_is_held(&vq->mutex));
 	rcu_assign_pointer(vq->worker, worker);
+	worker->attachment_cnt++;
 
-	if (!old_worker)
+	if (!old_worker) {
+		mutex_unlock(&vq->mutex);
+		mutex_unlock(&worker->mutex);
 		return;
+	}
+	mutex_unlock(&vq->mutex);
+	mutex_unlock(&worker->mutex);
+
 	/*
 	 * Take the worker mutex to make sure we see the work queued from
 	 * device wide flushes which doesn't use RCU for execution.
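
As background for anyone reviewing the locking change, below is a minimal
userspace sketch (not part of the patch) of the pattern the commit moves to:
readers fetch a queue's worker pointer locklessly, while writers swap it under
a per-queue mutex plus the new worker's mutex rather than a device-wide mutex.
All names here (struct queue, struct worker, queue_swap_worker) are made up
for illustration, C11 atomics stand in for the kernel's RCU accessors, and the
flush of the old worker's pending work is omitted.

/*
 * Illustrative userspace analogue of the per-queue locking scheme.
 * Build with: cc -pthread sketch.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct worker {
	pthread_mutex_t mutex;          /* analogue of worker->mutex            */
	int attachment_cnt;             /* how many queues point at this worker */
	const char *name;
};

struct queue {
	pthread_mutex_t mutex;          /* analogue of vq->mutex                */
	_Atomic(struct worker *) worker;/* pointer readers load without locks   */
};

/* Attach 'new' to q and return the previously attached worker, if any. */
static struct worker *queue_swap_worker(struct queue *q, struct worker *new)
{
	struct worker *old;

	pthread_mutex_lock(&new->mutex);
	pthread_mutex_lock(&q->mutex);

	old = atomic_load(&q->worker);
	atomic_store(&q->worker, new);  /* lockless readers now see 'new' */
	new->attachment_cnt++;

	pthread_mutex_unlock(&q->mutex);
	pthread_mutex_unlock(&new->mutex);

	if (old) {
		/* Drop the old worker's attachment under its own mutex. */
		pthread_mutex_lock(&old->mutex);
		old->attachment_cnt--;
		pthread_mutex_unlock(&old->mutex);
	}
	return old;
}

int main(void)
{
	struct worker w1 = { PTHREAD_MUTEX_INITIALIZER, 0, "w1" };
	struct worker w2 = { PTHREAD_MUTEX_INITIALIZER, 0, "w2" };
	struct queue q = { PTHREAD_MUTEX_INITIALIZER, NULL };

	queue_swap_worker(&q, &w1);
	queue_swap_worker(&q, &w2);
	printf("attached: %s (w1 cnt=%d, w2 cnt=%d)\n",
	       atomic_load(&q.worker)->name,
	       w1.attachment_cnt, w2.attachment_cnt);
	return 0;
}

The narrower scope is the point of the change: per the commit message, some
drivers hold the vhost_dev mutex while flushing, so serializing swaps on the
virtqueue mutex instead lets the later SIGKILL handling swap or clear a
queue's worker without depending on the device-wide lock.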