Re: [PATCH V7 mlx5-next 08/15] vfio: Define device migration protocol v2

On Tue, Feb 15, 2022 at 04:32:31PM -0700, Alex Williamson wrote:

> > I suppose you have to do as Alex says and try to estimate how much
> > time the stop phase of migration will take and grant only the
> > remaining time from the SLA to the guest to finish its PRI flushing,
> > otherwise go back to PRE_COPY and try again later if the timer hits.
> > 
> > This suggests to me the right interface from the driver is some
> > estimate of time to enter STOP_COPY and resulting required transfer
> > size.
> > 
> > Still, I just don't see how SLAs can really be feasible with this kind
> > of HW that requires guest co-operation..
> 
> Devil's advocate, does this discussion raise any concerns whether a
> synchronous vs asynchronous arc transition ioctl is still the right
> solution here?  

v2 switched to the data_fd, which allows almost everything important
to be async, assuming someone wants to implement it in qemu and a
driver.

It allows RUNNING -> STOP_COPY to be made async because the driver can
return from SET_STATE immediately, background the state save, and
indicate completion/progress/error via poll (readable) on the
data_fd. However, the device does still have to suspend DMA
synchronously.
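
Roughly, the userspace side of that save flow looks like the sketch
below. vfio_mig_set_state() is just a hypothetical wrapper around the
SET_STATE operation this series defines (not the exact uAPI), assumed
to hand back the data_fd for the new state:

    #include <linux/vfio.h>     /* VFIO_DEVICE_STATE_* from this series */
    #include <poll.h>
    #include <unistd.h>

    /* Hypothetical wrapper around this series' SET_STATE device feature;
     * assumed to return the data_fd for the new state, or -1 on error. */
    static int vfio_mig_set_state(int device_fd, unsigned int new_state);

    /* Save flow: SET_STATE returns quickly, the driver backgrounds the
     * state save, and poll()/read() on the data_fd drive the transfer. */
    static int save_device_state(int device_fd, int out_fd)
    {
        char buf[65536];
        int data_fd = vfio_mig_set_state(device_fd, VFIO_DEVICE_STATE_STOP_COPY);

        if (data_fd < 0)
            return -1;

        for (;;) {
            struct pollfd pfd = { .fd = data_fd, .events = POLLIN };
            ssize_t len;

            if (poll(&pfd, 1, -1) < 0 || (pfd.revents & POLLERR))
                return -1;          /* the driver reported a save error */

            len = read(data_fd, buf, sizeof(buf));
            if (len < 0)
                return -1;
            if (len == 0)
                break;              /* end of the device state */
            write(out_fd, buf, len);    /* to the migration stream;
                                           short writes ignored for brevity */
        }
        close(data_fd);
        return 0;
    }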

RESUMING -> STOP can also be async. The driver will make the data_fd
not writable before the last byte, using its internal knowledge of the
data framing. Once the driver allows the last byte to be delivered,
qemu will immediately do SET_STATE, which will be low latency.
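
The resume side is the mirror image: keep writing into the data_fd,
let poll() throttle you whenever the driver withholds writability, and
finish with the now-cheap RESUMING -> STOP arc. Again just a sketch,
reusing the hypothetical vfio_mig_set_state() wrapper from above:

    #include <linux/vfio.h>     /* VFIO_DEVICE_STATE_* from this series */
    #include <poll.h>
    #include <unistd.h>

    /* Same hypothetical SET_STATE wrapper as in the save sketch. */
    static int vfio_mig_set_state(int device_fd, unsigned int new_state);

    /* in_fd carries the saved state from the migration stream. */
    static int restore_device_state(int device_fd, int data_fd, int in_fd)
    {
        char buf[65536];
        ssize_t len;

        while ((len = read(in_fd, buf, sizeof(buf))) > 0) {
            char *p = buf;

            while (len > 0) {
                struct pollfd pfd = { .fd = data_fd, .events = POLLOUT };
                ssize_t done;

                /* Not writable until the driver has absorbed what it
                 * already has, per its own data framing. */
                if (poll(&pfd, 1, -1) < 0 || (pfd.revents & POLLERR))
                    return -1;
                done = write(data_fd, p, len);
                if (done < 0)
                    return -1;
                p += done;
                len -= done;
            }
        }

        /* Last byte accepted; the final arc should be low latency. */
        return vfio_mig_set_state(device_fd, VFIO_DEVICE_STATE_STOP) < 0 ? -1 : 0;
    }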

The entire data transfer flow itself is now async and event driven,
and can be run in parallel across devices with an epoll or io_uring
type scheme.
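
For instance, an epoll flavour of that, draining the STOP_COPY
data_fds of several devices from one loop, could look something like
this (drain_one() is an illustrative helper, not any real API; it
reads whatever is ready into the migration stream):

    #include <sys/epoll.h>
    #include <unistd.h>

    /* Illustrative helper: read what's ready from one data_fd into the
     * migration stream; returns 0 at EOF, >0 if more is expected, <0 on
     * error. */
    static int drain_one(int data_fd);

    static int drain_all(const int *data_fds, unsigned int ndev)
    {
        int ep = epoll_create1(0);
        unsigned int remaining = ndev;

        if (ep < 0)
            return -1;

        for (unsigned int i = 0; i < ndev; i++) {
            struct epoll_event ev = { .events = EPOLLIN, .data.u32 = i };

            if (epoll_ctl(ep, EPOLL_CTL_ADD, data_fds[i], &ev) < 0)
                return -1;
        }

        /* Every device makes progress as its data becomes available. */
        while (remaining) {
            struct epoll_event ev;
            int rc;

            if (epoll_wait(ep, &ev, 1, -1) < 0)
                return -1;
            rc = drain_one(data_fds[ev.data.u32]);
            if (rc < 0)
                return -1;
            if (rc == 0) {
                epoll_ctl(ep, EPOLL_CTL_DEL, data_fds[ev.data.u32], NULL);
                remaining--;
            }
        }
        close(ep);
        return 0;
    }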

STOP->RUNNING should be low latency for any reasonable device design.

For the P2P extension, the RUNNING -> RUNNING_P2P arc happens with
vCPUs stopped, but I think a reasonable implementation must make this
low latency, just like suspending DMA to get to STOP_COPY must be low
latency. Making it async won't make it faster, though I would like to
see it run in parallel for all P2P devices.

The other arcs have the vCPUs running, so they don't matter here.

In essence, compared to v1, we already made it almost fully async.

Also, at least with the mlx5 design, we can run all the commands async
(though there is a blocker preventing this right now); however, we
cannot abort commands in progress. So as far as an SLA is concerned, I
don't think async necessarily helps much.

I also think acc and several other drivers we are looking at would not
implement, or gain any advantage from, async arcs.

Are there more arcs that benefit from async? PRI draining has come
up.

Keep in mind, qemu can still run SET_STATE in a userspace thread. There
has also been talk about a generic io_uring based kernel-threaded
ioctl: https://lwn.net/Articles/844875/
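
Something like the sketch below is all the userspace-threading option
really takes: push the (blocking) SET_STATE into a helper thread and
kick an eventfd when it completes, so the main event loop sees the arc
finish like any other fd event. Same hypothetical wrapper as before:

    #include <pthread.h>
    #include <sys/eventfd.h>
    #include <stdint.h>
    #include <unistd.h>

    static int vfio_mig_set_state(int device_fd, unsigned int new_state);

    struct set_state_job {
        int device_fd;
        unsigned int new_state;
        int done_efd;           /* poll this from the main event loop */
        int result;             /* valid once done_efd fires */
    };

    static void *set_state_worker(void *arg)
    {
        struct set_state_job *job = arg;
        uint64_t one = 1;

        job->result = vfio_mig_set_state(job->device_fd, job->new_state);
        write(job->done_efd, &one, sizeof(one));    /* wake the event loop */
        return NULL;
    }

    static int set_state_async(struct set_state_job *job, pthread_t *thread)
    {
        job->done_efd = eventfd(0, EFD_CLOEXEC);
        if (job->done_efd < 0)
            return -1;
        return pthread_create(thread, NULL, set_state_worker, job) ? -1 : 0;
    }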

What I suggested to Kevin is also something to look at: userspace
provides an eventfd to SET_STATE and the eventfd is triggered when the
background action is done.

So, I'm not worried about this. There are more than enough options to
address any async requirements down the road.

> and processors.  The mlx5 driver already places an upper bound on
> migration data size internally.

We did that because it seemed unreasonable to allow userspace to
allocate unlimited kernel memory during resuming. Ideally we'd limit
it to the device's max capability but the device doesn't know how to
do that today.

> Maybe some of these can come as DEVICE_FEATURES as we go, but for any
> sort of cloud vendor SLA, I'm afraid we're only enabling migration of
> devices with negligible transition latencies and negligible device
> states

Even if this is true, it is not a failure! Most of the migration
drivers we foresee are of this class.

My feeling is that more complex devices would benefit from some of
this, e.g. estimating times, but I'd rather collect actual field data
and understand where things lie, and what device changes are needed,
before we design something.

> with some hand waving how to determine that either of those are
> the case without device specific knowledge in the orchestration.

I don't think the orchestration necessarily needs special
knowledge. Certainly, when the cloud operator designs the VMs and sets
the SLA parameters, they need to do it with an understanding of what
the mix of devices is and what kind of migration performance they get
out of the entire system.

More than anything system migration performance is going to be
impacted by the network for devices like mlx5 that have a non-trivial
STOP_COPY data blob.

Basically, I think it is worth thinking about, but not worth acting on
right now.

Jason


