> Subject: RE: [PATCH 2/2] Drivers: hv: add XStore Fastpath driver
>
> > From: Long Li <longli@xxxxxxxxxxxxx> Sent: Wednesday, June 23, 2021 3:59 PM
> >
> > [snip]
> >
> > > > > > +
> > > > > > +static void xs_fastpath_remove_vmbus(struct hv_device *device)
> > > > > > +{
> > > > > > +	struct xs_fastpath_device *dev = hv_get_drvdata(device);
> > > > > > +	unsigned long flags;
> > > > > > +
> > > > > > +	spin_lock_irqsave(&dev->vsp_pending_lock, flags);
> > > > > > +	if (!list_empty(&dev->vsp_pending_list)) {
> > > > >
> > > > > I don't think this can ever happen. If the device has already
> > > > > been removed by xs_fastpath_remove_device(), then we know that
> > > > > the device isn't open in any processes, so there can't be any
> > > > > ioctls in progress that would have entries in the vsp_pending_list.
> > > >
> > > > The VSC is designed to support asynchronous I/O (though not
> > > > implemented in this version), so vsp_pending_list is needed in case
> > > > a user-mode process closes the file without waiting for I/O.
> > >
> > > I was thinking more about the handling of asynchronous I/Os. As I
> > > noted previously, this function is called after we know that the
> > > device isn't open in any processes. In fact, a process that
> > > previously had the device open might have terminated. So it seems
> > > problematic to have async I/Os still pending, since they would have
> > > passed guest physical addresses to Hyper-V. If the process making
> > > the original async I/O request has terminated, presumably any pinned
> > > pages have been unpinned, so who knows how the memory is now being
> > > used in the guest.
> > >
> > > With that thinking in mind, it seems like waiting for any pending
> > > asynchronous I/Os needs to be done in xs_fastpath_fop_release, not
> > > here. The waiting would have to be only for asynchronous I/Os
> > > associated with that particular open of the device.
> > > With that waiting in place, when the close completes we know that
> > > no memory is pinned for associated asynchronous I/Os, and Hyper-V
> > > doesn't have any PFNs that would be problematic if it later wrote
> > > to them.
> > >
> > > Michael

I'm making changes in v2 as you suggested.

Long

> > > > > > +		spin_unlock_irqrestore(&dev->vsp_pending_lock, flags);
> > > > > > +		xs_fastpath_dbg("wait for vsp_pending_list\n");
> > > > > > +		wait_event(dev->wait_vsp,
> > > > > > +			list_empty(&dev->vsp_pending_list));
> > > > > > +	} else
> > > > > > +		spin_unlock_irqrestore(&dev->vsp_pending_lock, flags);
> > > > > > +
> > > > > > +	/* At this point, no VSC/VSP traffic is possible over vmbus */
> > > > > > +	hv_set_drvdata(device, NULL);
> > > > > > +	vmbus_close(device->channel);
> > > > > > +}
> > > > > > +
> > > > > > +static int xs_fastpath_probe(struct hv_device *device,
> > > > > > +			const struct hv_vmbus_device_id *dev_id)
> > > > > > +{
> > > > > > +	int rc;
> > > > > > +
> > > > > > +	xs_fastpath_dbg("probing device\n");
> > > > > > +
> > > > > > +	rc = xs_fastpath_connect_to_vsp(device, xs_fastpath_ringbuffer_size);
> > > > > > +	if (rc) {
> > > > > > +		xs_fastpath_err("error connecting to VSP rc %d\n", rc);
> > > > > > +		return rc;
> > > > > > +	}
> > > > > > +
> > > > > > +	// create user-mode client library facing device
> > > > > > +	rc = xs_fastpath_create_device(&xs_fastpath_dev);
> > > > > > +	if (rc) {
> > > > > > +		xs_fastpath_remove_vmbus(device);
> > > > > > +		return rc;
> > > > > > +	}
> > > > > > +
> > > > > > +	xs_fastpath_dbg("successfully probed device\n");
> > > > > > +	return 0;
> > > > > > +}
> > > > > > +
> > > > > > +static int xs_fastpath_remove(struct hv_device *dev)
> > > > > > +{
> > > > > > +	struct xs_fastpath_device *device = hv_get_drvdata(dev);
> > > > > > +
> > > > > > +	device->removing = true;
> > > > > > +	xs_fastpath_remove_device(device);
> > > > > > +	xs_fastpath_remove_vmbus(dev);
> > > > > > +	return 0;
> > > > > > +}
> > > > > > +
> > > > > > +static struct hv_driver xs_fastpath_drv = {
> > > > > > +	.name = KBUILD_MODNAME,
> > > > > > +	.id_table = id_table,
> > > > > > +	.probe = xs_fastpath_probe,
> > > > > > +	.remove = xs_fastpath_remove,
> > > > > > +	.driver = {
> > > > > > +		.probe_type = PROBE_PREFER_ASYNCHRONOUS,
> > > > > > +	},
> > > > > > +};
> > > > > > +