On Thu, Feb 22, 2024 at 04:46:38AM +0000, Parav Pandit wrote:
> 
> 
> > From: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
> > Sent: Wednesday, February 21, 2024 3:35 AM
> > To: Parav Pandit <parav@xxxxxxxxxx>
> > 
> > On Sat, Feb 17, 2024 at 08:08:48PM +0200, Parav Pandit wrote:
> > > When the PCI device is surprise removed, requests won't complete from
> > > the device. These IOs are never completed and disk deletion hangs
> > > indefinitely.
> > >
> > > Fix it by aborting the IOs which the device will never complete when
> > > the VQ is broken.
> > >
> > > With this fix now fio completes swiftly.
> > > An alternative of IO timeout has been considered, however when the
> > > driver knows about unresponsive block device, swiftly clearing them
> > > enables users and upper layers to react quickly.
> > >
> > > Verified with multiple device unplug cycles with pending IOs in virtio
> > > used ring and some pending with device.
> > >
> > > In future instead of VQ broken, a more elegant method can be used. At
> > > the moment the patch is kept to its minimal changes given its urgency
> > > to fix broken kernels.
> > >
> > > Fixes: 43bb40c5b926 ("virtio_pci: Support surprise removal of virtio pci device")
> > > Cc: stable@xxxxxxxxxxxxxxx
> > > Reported-by: lirongqing@xxxxxxxxx
> > > Closes: https://lore.kernel.org/virtualization/c45dd68698cd47238c55fb73ca9b4741@xxxxxxxxx/
> > > Co-developed-by: Chaitanya Kulkarni <kch@xxxxxxxxxx>
> > > Signed-off-by: Chaitanya Kulkarni <kch@xxxxxxxxxx>
> > > Signed-off-by: Parav Pandit <parav@xxxxxxxxxx>
> > > ---
> > >  drivers/block/virtio_blk.c | 54 ++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 54 insertions(+)
> > >
> > > diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> > > index 2bf14a0e2815..59b49899b229 100644
> > > --- a/drivers/block/virtio_blk.c
> > > +++ b/drivers/block/virtio_blk.c
> > > @@ -1562,10 +1562,64 @@ static int virtblk_probe(struct virtio_device *vdev)
> > >          return err;
> > >  }
> > >
> > > +static bool virtblk_cancel_request(struct request *rq, void *data)
> > > +{
> > > +        struct virtblk_req *vbr = blk_mq_rq_to_pdu(rq);
> > > +
> > > +        vbr->in_hdr.status = VIRTIO_BLK_S_IOERR;
> > > +        if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq))
> > > +                blk_mq_complete_request(rq);
> > > +
> > > +        return true;
> > > +}
> > > +
> > > +static void virtblk_cleanup_reqs(struct virtio_blk *vblk)
> > > +{
> > > +        struct virtio_blk_vq *blk_vq;
> > > +        struct request_queue *q;
> > > +        struct virtqueue *vq;
> > > +        unsigned long flags;
> > > +        int i;
> > > +
> > > +        vq = vblk->vqs[0].vq;
> > > +        if (!virtqueue_is_broken(vq))
> > > +                return;
> > > +
> > > +        q = vblk->disk->queue;
> > > +        /* Block upper layer to not get any new requests */
> > > +        blk_mq_quiesce_queue(q);
> > > +
> > > +        for (i = 0; i < vblk->num_vqs; i++) {
> > > +                blk_vq = &vblk->vqs[i];
> > > +
> > > +                /* Synchronize with any ongoing virtblk_poll() which may be
> > > +                 * completing the requests to upper layer which has already
> > > +                 * crossed the broken vq check.
> > > +                 */
> > > +                spin_lock_irqsave(&blk_vq->lock, flags);
> > > +                spin_unlock_irqrestore(&blk_vq->lock, flags);
> > > +        }
> > > +
> > > +        blk_sync_queue(q);
> > > +
> > > +        /* Complete remaining pending requests with error */
> > > +        blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_cancel_request, vblk);
> > 
> > Interrupts can still occur here. What prevents the race between
> > virtblk_cancel_request() and virtblk_request_done()?
> > 
> The PCI device which generates the interrupt is already removed, so the interrupt shouldn't arrive while executing cancel_request.
> (This is ignoring the race that Ming pointed out. I am preparing the v1 that eliminates that condition.)
> 
> If there was an ongoing virtblk_request_done(), it is synchronized by the for loop above.

Ah, I see now that:

+        if (!virtqueue_is_broken(vq))
+                return;

relates to:

static void virtio_pci_remove(struct pci_dev *pci_dev)
{
        struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev);
        struct device *dev = get_device(&vp_dev->vdev.dev);

        /*
         * Device is marked broken on surprise removal so that virtio upper
         * layers can abort any ongoing operation.
         */
        if (!pci_device_is_present(pci_dev))
                virtio_break_device(&vp_dev->vdev);

Please rename virtblk_cleanup_reqs() to virtblk_cleanup_broken_device() or
similar so it's clear that this function only applies when the device is
broken? For example, it won't handle ACPI hot unplug requests because the
device will still be present.

Thanks,
Stefan

> > 
> > > +        blk_mq_tagset_wait_completed_request(&vblk->tag_set);
> > > +
> > > +        /*
> > > +         * Unblock any pending dispatch I/Os before we destroy device. From
> > > +         * del_gendisk() -> __blk_mark_disk_dead(disk) will set GD_DEAD flag,
> > > +         * that will make sure any new I/O from bio_queue_enter() to fail.
> > > +         */
> > > +        blk_mq_unquiesce_queue(q);
> > > +}
> > > +
> > >  static void virtblk_remove(struct virtio_device *vdev)
> > >  {
> > >          struct virtio_blk *vblk = vdev->priv;
> > >
> > > +        virtblk_cleanup_reqs(vblk);
> > > +
> > >          /* Make sure no work handler is accessing the device. */
> > >          flush_work(&vblk->config_work);
> > >
> > > --
> > > 2.34.1
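
For reference, a minimal sketch of the rename Stefan is asking for might look like the following. This assumes the eventual v1 keeps the same broken-vq gating and per-vq lock barrier as the patch above; the name virtblk_cleanup_broken_device() is only his suggestion, and the body below is condensed from the posted patch rather than taken from any v1 or merged code.

/* Illustrative sketch only: name per Stefan's suggestion, body condensed
 * from the patch quoted above.
 */
static void virtblk_cleanup_broken_device(struct virtio_blk *vblk)
{
        struct request_queue *q = vblk->disk->queue;
        unsigned long flags;
        int i;

        /* Only acts when virtio_pci_remove() marked the device broken on
         * surprise removal; a graceful unplug (e.g. ACPI eject) leaves the
         * device present, so this path is skipped.
         */
        if (!virtqueue_is_broken(vblk->vqs[0].vq))
                return;

        /* Stop the block layer from submitting new requests. */
        blk_mq_quiesce_queue(q);

        /* Lock/unlock each vq to drain any virtblk_poll()/virtblk_request_done()
         * completion path that already passed the broken-vq check.
         */
        for (i = 0; i < vblk->num_vqs; i++) {
                spin_lock_irqsave(&vblk->vqs[i].lock, flags);
                spin_unlock_irqrestore(&vblk->vqs[i].lock, flags);
        }
        blk_sync_queue(q);

        /* Fail every request the broken device will never complete. */
        blk_mq_tagset_busy_iter(&vblk->tag_set, virtblk_cancel_request, vblk);
        blk_mq_tagset_wait_completed_request(&vblk->tag_set);

        blk_mq_unquiesce_queue(q);
}

The call site in virtblk_remove() would be unchanged apart from the name; the point of the rename is only to make it obvious at a glance that the function is a no-op for a device that is still present.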