> From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Sent: Wednesday, February 16, 2022 12:04 AM
>
> On Tue, Feb 15, 2022 at 10:41:56AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > > Sent: Wednesday, February 9, 2022 10:37 AM
> > >
> > > > > /* -------- API for Type1 VFIO IOMMU -------- */
> > > > >
> > > > > /**
> > > >
> > > > Otherwise, I'm still not sure how userspace handles the fact that it
> > > > can't know how much data will be read from the device and how important
> > > > that is.  There's no replacement of that feature from the v1 protocol
> > > > here.
> > >
> > > I'm not sure this was part of the v1 protocol either. Yes it had a
> > > pending_bytes, but I don't think it was actually expected to be 100%
> > > accurate. Computing this value accurately is potentially quite
> > > expensive, I would prefer we not enforce this on an implementation
> > > without a reason, and qemu currently doesn't make use of it.
> > >
> > > The ioctl from the precopy patch is probably the best approach, I
> > > think it would be fine to allow that for stop copy as well, but also
> > > don't see a usage right now.
> > >
> > > It is not something that needs decision now, it is very easy to detect
> > > if an ioctl is supported on the data_fd at runtime to add new things
> > > here when needed.
> > >
> >
> > Another interesting thing (not an immediate concern on this series)
> > is how to handle devices which may have long time (e.g. due to
> > draining outstanding requests, even w/o vPRI) to enter the STOP
> > state. that time is not as deterministic as pending bytes thus cannot
> > be reported back to the user before the operation is actually done.
>
> Well, it is not deterministic at all..
>
> I suppose you have to do as Alex says and try to estimate how much
> time the stop phase of migration will take and grant only the
> remaining time from the SLA to the guest to finish its PRI flushing,

Let's separate this from the PRI stuff, so no guest operation is
involved.

It's a simple story: the vCPUs have been stopped and Qemu requests a
state transition from RUNNING to STOP on a device whose migration
driver needs to drain outstanding requests before the device can be
stopped. Those requests don't rely on the vCPUs but still take time
to complete (thus may break the SLA), and they are invisible to the
migration driver (directly submitted by the guest, hence their drain
time cannot be estimated).

So the only means is for the user to wait on a fd with a timeout
(based on whatever SLA applies) and, if it expires, abort the
migration (possibly retrying later). I'm not sure whether we want to
leverage the new arc for vPRI here or just allow changing the STOP
behavior to return an eventfd for an async transition (a rough
userspace sketch of what I mean is appended below).

Thanks
Kevin
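
P.S. Here is a rough sketch of the wait-with-timeout flow I'm
proposing, assuming STOP hands back a completion eventfd for the
async transition. The names and the eventfd semantics are made up
for illustration; nothing like this exists in the series yet:

#include <errno.h>
#include <poll.h>
#include <stdint.h>
#include <unistd.h>

/*
 * Wait up to timeout_ms (derived from whatever SLA applies) for the
 * device to signal that it reached STOP.  Returns 0 on success,
 * -ETIMEDOUT when the budget expired (caller aborts the migration and
 * may retry later), or a negative errno on other failures.
 * Remaining-time accounting across EINTR is omitted for brevity.
 */
static int wait_device_stopped(int stop_eventfd, int timeout_ms)
{
	struct pollfd pfd = { .fd = stop_eventfd, .events = POLLIN };
	uint64_t count;
	int ret;

	do {
		ret = poll(&pfd, 1, timeout_ms);
	} while (ret < 0 && errno == EINTR);

	if (ret < 0)
		return -errno;
	if (ret == 0)
		return -ETIMEDOUT;	/* SLA exceeded, abort migration */

	if (read(stop_eventfd, &count, sizeof(count)) != sizeof(count))
		return -errno;

	return 0;
}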
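
While at it, the "detect if an ioctl is supported on the data_fd at
runtime" part above is just the usual ENOTTY probe; a minimal
userspace sketch (not from the actual series, only to show the
check):

#include <errno.h>
#include <stdbool.h>
#include <sys/ioctl.h>

/*
 * An unimplemented ioctl on the data_fd fails with ENOTTY; anything
 * else (success, or e.g. EINVAL/EFAULT from probing with a NULL
 * argument) means the fd recognizes the request.
 */
static bool data_fd_has_ioctl(int data_fd, unsigned long req)
{
	if (ioctl(data_fd, req, NULL) >= 0)
		return true;

	return errno != ENOTTY;
}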