On Tue, 15 Feb 2022 12:04:19 -0400
Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:

> On Tue, Feb 15, 2022 at 10:41:56AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> > > Sent: Wednesday, February 9, 2022 10:37 AM
> > >
> > > > > /* -------- API for Type1 VFIO IOMMU -------- */
> > > > >
> > > > > /**
> > > >
> > > > Otherwise, I'm still not sure how userspace handles the fact that it
> > > > can't know how much data will be read from the device and how important
> > > > that is. There's no replacement of that feature from the v1 protocol
> > > > here.
> > >
> > > I'm not sure this was part of the v1 protocol either. Yes it had a
> > > pending_bytes, but I don't think it was actually expected to be 100%
> > > accurate. Computing this value accurately is potentially quite
> > > expensive, I would prefer we not enforce this on an implementation
> > > without a reason, and qemu currently doesn't make use of it.
> > >
> > > The ioctl from the precopy patch is probably the best approach, I
> > > think it would be fine to allow that for stop copy as well, but also
> > > don't see a usage right now.
> > >
> > > It is not something that needs decision now, it is very easy to detect
> > > if an ioctl is supported on the data_fd at runtime to add new things
> > > here when needed.
> >
> > Another interesting thing (not an immediate concern on this series)
> > is how to handle devices which may have long time (e.g. due to
> > draining outstanding requests, even w/o vPRI) to enter the STOP
> > state. that time is not as deterministic as pending bytes thus cannot
> > be reported back to the user before the operation is actually done.
>
> Well, it is not deterministic at all..
>
> I suppose you have to do as Alex says and try to estimate how much
> time the stop phase of migration will take and grant only the
> remaining time from the SLA to the guest to finish its PRI flushing,
> otherwise go back to PRE_COPY and try again later if the timer hits.
>
> This suggests to me the right interface from the driver is some
> estimate of time to enter STOP_COPY and resulting required transfer
> size.
>
> Still, I just don't see how SLAs can really be feasible with this kind
> of HW that requires guest co-operation..

Devil's advocate, does this discussion raise any concerns whether a
synchronous vs asynchronous arc transition ioctl is still the right
solution here? I can imagine for instance that posting a state change
and being able to poll for pending transactions or completion of the
saved state generation and ultimate size could be very useful for
managing migration SLAs, not to mention trivial userspace support to
parallel'ize state changes.

Reporting a maximum device state size hint also seems relatively
trivial since this should just be the sum of on-device memory, asics,
and processors. The mlx5 driver already places an upper bound on
migration data size internally. Maybe some of these can come as
DEVICE_FEATURES as we go, but for any sort of cloud vendor SLA, I'm
afraid we're only enabling migration of devices with negligible
transition latencies and negligible device states, with some hand
waving how to determine that either of those are the case without
device specific knowledge in the orchestration. Thanks,

Alex
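
As a rough sketch of the runtime probing Jason describes above (detecting
whether an optional ioctl is supported on the data_fd), userspace can simply
issue the ioctl and treat ENOTTY as "extension not implemented". Note this is
illustration only: VFIO_MIG_GET_PENDING_BYTES, its request number, and the
struct layout are hypothetical placeholders, not part of the proposed uAPI;
the only real mechanism relied on is that an unrecognized ioctl fails with
ENOTTY.

#include <errno.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

/* Hypothetical extension: report bytes still pending in the data stream. */
struct vfio_mig_pending_bytes {
	uint32_t argsz;
	uint32_t flags;
	uint64_t pending_bytes;
};

/* Hypothetical request number, reusing the VFIO ioctl type for the example. */
#define VFIO_MIG_GET_PENDING_BYTES	_IO(';', 200)

/*
 * Probe the optional extension on the migration data_fd.  Returns 0 and
 * fills *pending on success, -ENOTTY if the driver does not implement the
 * ioctl, or another -errno on failure.
 */
static int query_pending_bytes(int data_fd, uint64_t *pending)
{
	struct vfio_mig_pending_bytes info = { .argsz = sizeof(info) };

	if (ioctl(data_fd, VFIO_MIG_GET_PENDING_BYTES, &info))
		return -errno;	/* -ENOTTY: caller falls back to the base protocol */

	*pending = info.pending_bytes;
	return 0;
}

This probe-at-runtime pattern is what lets the pending-bytes / size-hint
questions in the thread be deferred and added later without a separate
capability negotiation step.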