RE: [PATCH] virtio_blk: Fix device surprise removal

Hi Ming,

> From: Ming Lei <ming.lei@xxxxxxxxxx>
> Sent: Sunday, February 18, 2024 6:57 PM
> 
> On Sat, Feb 17, 2024 at 08:08:48PM +0200, Parav Pandit wrote:
> > When the PCI device is surprise removed, requests won't complete from
> > the device. These IOs are never completed and disk deletion hangs
> > indefinitely.
> >
> > Fix it by aborting the IOs which the device will never complete when
> > the VQ is broken.
> >
> > With this fix now fio completes swiftly.
> > An alternative of IO timeout has been considered, however when the
> > driver knows about unresponsive block device, swiftly clearing them
> > enables users and upper layers to react quickly.
> >
> > Verified with multiple device unplug cycles with pending IOs in virtio
> > used ring and some pending with device.
> >
> > In future instead of VQ broken, a more elegant method can be used. At
> > the moment the patch is kept to its minimal changes given its urgency
> > to fix broken kernels.
> >
> > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > Cc: stable@xxxxxxxxxxxxxxx
> > Reported-by: lirongqing@xxxxxxxxx
> > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@xxxxxxxxx/
> > Co-developed-by: Chaitanya Kulkarni <kch@xxxxxxxxxx>
> > Signed-off-by: Chaitanya Kulkarni <kch@xxxxxxxxxx>
> > Signed-off-by: Parav Pandit <parav@xxxxxxxxxx>
> > ---
> >  drivers/block/virtio_blk.c | 54 ++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 54 insertions(+)
> >
> > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > index 2bf14a0e2815..59b49899b229 100644
> > --- a/drivers/block/virtio_blk.c
> > +++ b/drivers/block/virtio_blk.c
> > @@ -1562,10 +1562,64 @@ static int virtblk_probe(struct virtio_device *vdev)
> >  	return err;
> >  }
> >
> > +static bool virtblk_cancel_request(struct request *rq, void *data)
> > +{
> > +	struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > +
> > +	vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > +	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> > +		blk_mq_complete_request(rq);
> > +
> > +	return true;
> > +}
> > +
> > +static void virtblk_cleanup_reqs(struct virtio_blk *vblk)
> > +{
> > +	struct virtio_blk_vq *blk_vq;
> > +	struct request_queue *q;
> > +	struct virtqueue *vq;
> > +	unsigned long flags;
> > +	int i;
> > +
> > +	vq = vblk->vqs[0].vq;
> > +	if (!virtqueue_is_broken(vq))
> > +		return;
> > +
> 
> What if the surprise happens after the above check?
> 
> 
In that small timing window, the race still exists.

I think blk_mq_quiesce_queue(q) should move up before the request cleanup, regardless of the surprise case, along with the other changes below.

Additionally, for the non-surprise case, it would be better to have a graceful timeout to complete already-queued requests.
In the absence of a timeout scheme for this regression, shall we complete only the requests which the device has already completed (instead of waiting for the grace period)?
There was past work from Chaitanya on the graceful timeout.

The sequence for the fix I have in mind is:
1. quiesce the queue
2. complete all requests which the device has completed, with their status
3. stop the transport (queues)
4. complete remaining pending requests with error status

This should work regardless of surprise case.
An additional/optional graceful timeout in the non-surprise case can help with step 2.
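To make the proposal concrete, the sequence above could look roughly like the sketch below. This is only an illustration of the ordering, not a tested patch: it reuses virtblk_cancel_request() from the patch in this thread, while virtblk_teardown_reqs() itself and the field names are hypothetical.

```c
/* Hypothetical sketch of steps 1-4 above. Only the blk-mq/virtio core
 * APIs are real; this helper and its exact field accesses are illustrative.
 */
static void virtblk_teardown_reqs(struct virtio_blk *vblk)
{
	struct request_queue *q = vblk->disk->queue;
	unsigned long flags;
	int i;

	/* 1. Quiesce so no new requests are dispatched to the device. */
	blk_mq_quiesce_queue(q);

	/* 2. Complete requests the device already finished, with their real
	 *    status. A graceful timeout for the non-surprise case would sit
	 *    around this loop.
	 */
	for (i = 0; i < vblk->num_vqs; i++) {
		struct virtblk_req *vbr;
		unsigned int len;

		spin_lock_irqsave(&vblk->vqs[i].lock, flags);
		while ((vbr = virtqueue_get_buf(vblk->vqs[i].vq, &len)))
			blk_mq_complete_request(blk_mq_rq_from_pdu(vbr));
		spin_unlock_irqrestore(&vblk->vqs[i].lock, flags);
	}

	/* 3. Stop the transport queues so the device can no longer use them. */
	virtio_break_device(vblk->vdev);

	/* 4. Fail every request still outstanding with an error status. */
	blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_cancel_request, vblk);
	blk_mq_unquiesce_queue(q);
}
```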

WDYT?

> Thanks,
> Ming
