This is a note to let you know that I've just added the patch titled

    virtio-blk: Use block layer provided spinlock

to the 3.4-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:

    virtio-blk-use-block-layer-provided-spinlock.patch

and it can be found in the queue-3.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From e551857183b52b1af787e35b25a4e556fa0c0c75 Mon Sep 17 00:00:00 2001
From: Asias He <asias@xxxxxxxxxx>
Date: Fri, 25 May 2012 16:03:27 +0800
Subject: virtio-blk: Use block layer provided spinlock

From: Asias He <asias@xxxxxxxxxx>

commit 2c95a3290919541b846bee3e0fbaa75860929f53 upstream.

The block layer will allocate a spinlock for the queue if the driver does
not provide one in blk_init_queue().

The reason to use the internal spinlock is that blk_cleanup_queue() will
switch to the internal spinlock in its cleanup code path:

	if (q->queue_lock != &q->__queue_lock)
		q->queue_lock = &q->__queue_lock;

However, processes which are in D state might have taken the driver
provided spinlock; when those processes wake up, they would release the
block layer provided spinlock instead:

=====================================
[ BUG: bad unlock balance detected! ]
3.4.0-rc7+ #238 Not tainted
-------------------------------------
fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
[<ffffffff813274d2>] blk_queue_bio+0x2a2/0x380
but there are no more locks to release!

other info that might help us debug this:
1 lock held by fio/3587:
 #0:  (&(&vblk->lock)->rlock){......}, at: [<ffffffff8132661a>] get_request_wait+0x19a/0x250

Other drivers use the block layer provided spinlock as well, e.g. SCSI.
Switching to the block layer provided spinlock saves a bit of memory and
does not increase lock contention. Performance tests show no real
difference before and after this patch.

Changes in v2:
- Improve commit log as Michael suggested.

Cc: virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: kvm@xxxxxxxxxxxxxxx
Signed-off-by: Asias He <asias@xxxxxxxxxx>
Acked-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
Signed-off-by: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
Cc: Li Zefan <lizefan@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/block/virtio_blk.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -21,8 +21,6 @@ struct workqueue_struct *virtblk_wq;
 
 struct virtio_blk
 {
-	spinlock_t lock;
-
 	struct virtio_device *vdev;
 	struct virtqueue *vq;
 
@@ -69,7 +67,7 @@ static void blk_done(struct virtqueue *v
 	unsigned int len;
 	unsigned long flags;
 
-	spin_lock_irqsave(&vblk->lock, flags);
+	spin_lock_irqsave(vblk->disk->queue->queue_lock, flags);
 	while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
 		int error;
 
@@ -104,7 +102,7 @@ static void blk_done(struct virtqueue *v
 	}
 	/* In case queue is stopped waiting for more buffers. */
 	blk_start_queue(vblk->disk->queue);
-	spin_unlock_irqrestore(&vblk->lock, flags);
+	spin_unlock_irqrestore(vblk->disk->queue->queue_lock, flags);
 }
 
 static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
@@ -438,7 +436,6 @@ static int __devinit virtblk_probe(struc
 	}
 
 	INIT_LIST_HEAD(&vblk->reqs);
-	spin_lock_init(&vblk->lock);
 	vblk->vdev = vdev;
 	vblk->sg_elems = sg_elems;
 	sg_init_table(vblk->sg, vblk->sg_elems);
@@ -463,7 +460,7 @@ static int __devinit virtblk_probe(struc
 		goto out_mempool;
 	}
 
-	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, &vblk->lock);
+	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, NULL);
 	if (!q) {
 		err = -ENOMEM;
 		goto out_put_disk;
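
For anyone reviewing the backport, the driver-side pattern the patch moves
virtio-blk to is roughly the sketch below. This is illustration only, not
part of the patch: "struct mydev" and the my_* functions are hypothetical,
and only blk_init_queue(), q->queue_lock and the spin_lock_irqsave()/
spin_unlock_irqrestore() calls reflect the 3.4-era block API used above.

/*
 * Sketch: let blk_init_queue() fall back to the queue's internal lock by
 * passing NULL, then take q->queue_lock in the completion path instead of
 * a driver-private spinlock.  Hypothetical names, for illustration only.
 */
#include <linux/blkdev.h>
#include <linux/spinlock.h>

struct mydev {
	struct request_queue *q;
};

/* Request function; the block layer calls it with q->queue_lock held. */
static void my_request_fn(struct request_queue *q)
{
	/* fetch and dispatch requests, as do_virtblk_request() does */
}

static int my_init_queue(struct mydev *dev)
{
	/*
	 * Passing NULL makes the block layer use q->__queue_lock, so the
	 * lock blk_cleanup_queue() later switches to is the same lock the
	 * driver has been using all along, keeping lock/unlock balanced.
	 */
	dev->q = blk_init_queue(my_request_fn, NULL);
	return dev->q ? 0 : -ENOMEM;
}

/* Completion path: lock whatever spinlock the queue actually uses. */
static void my_done(struct mydev *dev)
{
	unsigned long flags;

	spin_lock_irqsave(dev->q->queue_lock, flags);
	/* ... end completed requests, restart the queue, ... */
	spin_unlock_irqrestore(dev->q->queue_lock, flags);
}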

Patches currently in stable-queue which might be from asias@xxxxxxxxxx are

queue-3.4/virtio-blk-use-block-layer-provided-spinlock.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html