Re: [PATCH 3/3] virtio-blk: Use block layer provided spinlock

On 05/25/2012 03:02 PM, Michael S. Tsirkin wrote:
On Fri, May 25, 2012 at 10:34:49AM +0800, Asias He wrote:
Block layer will allocate a spinlock for the queue if the driver does
not provide one in blk_init_queue().

The reason to use the internal spinlock is that blk_cleanup_queue() will
switch to use the internal spinlock in the cleanup code path.
         if (q->queue_lock != &q->__queue_lock)
                 q->queue_lock = &q->__queue_lock;
However, processes that are in D state might have taken the driver-provided
spinlock; when those processes wake up, they release the block-layer-provided
spinlock instead.

=====================================
[ BUG: bad unlock balance detected! ]
3.4.0-rc7+ #238 Not tainted
-------------------------------------
fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
[<ffffffff813274d2>] blk_queue_bio+0x2a2/0x380
but there are no more locks to release!

other info that might help us debug this:
1 lock held by fio/3587:
  #0:  (&(&vblk->lock)->rlock){......}, at:
[<ffffffff8132661a>] get_request_wait+0x19a/0x250
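
For illustration, a minimal userspace sketch of the lock disconnect described
above (not kernel code: driver_lock, internal_lock and queue_lock are
hypothetical stand-ins for vblk->lock, q->__queue_lock and q->queue_lock, and
an error-checking pthread mutex plays the role of lockdep):

	/* build: gcc -pthread lock_disconnect.c */
	#include <pthread.h>
	#include <stdio.h>

	static pthread_mutex_t driver_lock;	/* stand-in for vblk->lock */
	static pthread_mutex_t internal_lock;	/* stand-in for q->__queue_lock */
	static pthread_mutex_t *queue_lock;	/* stand-in for q->queue_lock */

	int main(void)
	{
		pthread_mutexattr_t attr;

		/* Error-checking mutexes report a bad unlock instead of
		 * silently corrupting state, much like lockdep does. */
		pthread_mutexattr_init(&attr);
		pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
		pthread_mutex_init(&driver_lock, &attr);
		pthread_mutex_init(&internal_lock, &attr);

		queue_lock = &driver_lock;
		pthread_mutex_lock(queue_lock);		/* acquires driver_lock */

		/* blk_cleanup_queue()-style switch while the lock is held: */
		if (queue_lock != &internal_lock)
			queue_lock = &internal_lock;

		/* Unlocking through the pointer now releases a lock that was
		 * never acquired -> bad unlock balance. */
		if (pthread_mutex_unlock(queue_lock) != 0)
			printf("bad unlock balance detected\n");

		return 0;
	}

In the splat above the same thing happens in the kernel: the task sleeping in
get_request_wait() acquired vblk->lock through q->queue_lock,
blk_cleanup_queue() then redirected q->queue_lock to q->__queue_lock, and the
unlock in blk_queue_bio() released the wrong lock.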

Other drivers use the block-layer-provided spinlock as well, e.g. SCSI. I
do not see any reason why we shouldn't,

OK, but the commit log is all wrong then; it should look like this:

	virtio uses an internal lock while block layer provides
	its own spinlock. Switching to the common lock saves
	a bit of memory and does not seem to have any disadvantages:
	this does not increase lock contention because .....
	Performance tests show no real difference: before ... after ...

Hmm. Why would using the internal lock have an impact on performance? Anyway, I will update the commit log.


even the lock unbalance issue can
be fixed by the block layer.

s/even/even if/ ?

yes ;-)

The lock unbalance issue wasn't yet discussed upstream, was it?

See the patch I sent this morning.

[PATCH] block: Fix lock unbalance caused by lock disconnect


Looking at it from the other side, even if virtio can
work around the issue, the block layer should be fixed if
it's buggy. Or maybe it's not buggy and this is just masking
some other real issue?

Yes, I got your point. I am trying to fix the block layer as well.

Does this mean it's inherently unsafe to use an internal spinlock?
Aren't there other drivers doing this?

I think so.

Cc: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Cc: "Michael S. Tsirkin" <mst@xxxxxxxxxx>
Cc: virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
Cc: kvm@xxxxxxxxxxxxxxx
Signed-off-by: Asias He <asias@xxxxxxxxxx>
---
  drivers/block/virtio_blk.c |    9 +++------
  1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b4fa2d7..774c31d 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -21,8 +21,6 @@ struct workqueue_struct *virtblk_wq;

  struct virtio_blk
  {
-	spinlock_t lock;
-
  	struct virtio_device *vdev;
  	struct virtqueue *vq;

@@ -65,7 +63,7 @@ static void blk_done(struct virtqueue *vq)
  	unsigned int len;
  	unsigned long flags;

-	spin_lock_irqsave(&vblk->lock, flags);
+	spin_lock_irqsave(vblk->disk->queue->queue_lock, flags);
  	while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
  		int error;

@@ -99,7 +97,7 @@ static void blk_done(struct virtqueue *vq)
  	}
  	/* In case queue is stopped waiting for more buffers. */
  	blk_start_queue(vblk->disk->queue);
-	spin_unlock_irqrestore(&vblk->lock, flags);
+	spin_unlock_irqrestore(vblk->disk->queue->queue_lock, flags);
  }

  static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
@@ -431,7 +429,6 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
  		goto out_free_index;
  	}

-	spin_lock_init(&vblk->lock);
  	vblk->vdev = vdev;
  	vblk->sg_elems = sg_elems;
  	sg_init_table(vblk->sg, vblk->sg_elems);
@@ -456,7 +453,7 @@ static int __devinit virtblk_probe(struct virtio_device *vdev)
  		goto out_mempool;
  	}

-	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, &vblk->lock);
+	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, NULL);
  	if (!q) {
  		err = -ENOMEM;
  		goto out_put_disk;
--
1.7.10.2


--
Asias

