* Michael S. Tsirkin (mst@xxxxxxxxxx) wrote:
> On Mon, Jan 04, 2016 at 07:11:25PM -0800, Alexander Duyck wrote:
> > >> The two mechanisms referenced above would likely require coordination with
> > >> QEMU and as such are open to discussion. I haven't attempted to address
> > >> them as I am not sure there is a consensus as of yet. My personal
> > >> preference would be to add a vendor-specific configuration block to the
> > >> emulated pci-bridge interfaces created by QEMU that would allow us to
> > >> essentially extend shpc to support guest live migration with pass-through
> > >> devices.
> > >
> > > shpc?
> >
> > That is kind of what I was thinking. We basically need some mechanism
> > to allow for the host to ask the device to quiesce. It has been
> > proposed to possibly even look at something like an ACPI interface
> > since I know ACPI is used by QEMU to manage hot-plug in the standard
> > case.
> >
> > - Alex
>
> Start by using hot-unplug for this!
>
> Really use your patch guest side, and write host side
> to allow starting migration with the device, but
> defer completing it.
>
> So
>
> 1.- host tells guest to start tracking memory writes
> 2.- guest acks
> 3.- migration starts
> 4.- most memory is migrated
> 5.- host tells guest to eject device
> 6.- guest acks
> 7.- stop vm and migrate rest of state
>
> It will already be a win since hot unplug after migration starts and
> most memory has been migrated is better than hot unplug before migration
> starts.
>
> Then measure downtime and profile. Then we can look at ways
> to quiesce device faster, which really means step 5 is replaced
> with "host tells guest to quiesce device and dirty (or just unmap!)
> all memory mapped for write by device".

Doing a hot-unplug is going to upset the guest's network stack's view of the
world; that's something we don't want to change.

Dave

> --
> MST

--
Dr. David Alan Gilbert / dgilbert@xxxxxxxxxx / Manchester, UK
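
As a rough illustration of the host-side sequence MST lists (steps 1-7), the
sketch below drives it over a QMP socket. migrate, query-migrate and
device_del are real QMP commands; "guest-track-dma-writes" is a made-up name
standing in for the guest-side dirty-tracking handshake of steps 1-2, and the
64MB "most memory is migrated" threshold is likewise arbitrary.

#!/usr/bin/env python3
# Sketch of MST's steps 1-7 driven from the host over QMP.
# "guest-track-dma-writes" is hypothetical; migrate, query-migrate
# and device_del exist today.
import json
import socket
import time

def qmp_session(path):
    """Connect to a QMP unix socket and do the capabilities handshake."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    chan = sock.makefile("rw")
    json.loads(chan.readline())                        # server greeting
    chan.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    chan.flush()
    json.loads(chan.readline())                        # handshake ack
    return chan

def qmp(chan, command, **arguments):
    """Issue one QMP command, skipping async events, and return the reply."""
    chan.write(json.dumps({"execute": command, "arguments": arguments}) + "\n")
    chan.flush()
    while True:
        reply = json.loads(chan.readline())
        if "return" in reply or "error" in reply:
            return reply

def migrate_with_passthrough(chan, uri, device_id):
    # Steps 1-2: ask the guest to start dirtying pages the device writes.
    qmp(chan, "guest-track-dma-writes", enable=True)   # hypothetical command

    # Step 3: start migration with the device still attached.
    qmp(chan, "migrate", uri=uri)

    # Step 4: wait until most guest RAM has been transferred.
    while True:
        info = qmp(chan, "query-migrate").get("return", {})
        remaining = info.get("ram", {}).get("remaining", 1 << 40)
        if remaining < (64 << 20):                     # arbitrary threshold
            break
        time.sleep(0.1)

    # Steps 5-6: only now request the hot-unplug; the guest acks by
    # releasing the device (QEMU emits DEVICE_DELETED once it is gone).
    qmp(chan, "device_del", id=device_id)

    # Step 7: with the device gone, the remaining dirty pages converge and
    # the usual stop-and-copy downtime phase completes the migration.
    while qmp(chan, "query-migrate")["return"].get("status") != "completed":
        time.sleep(0.1)

The point of deferring device_del to step 5 is exactly the one MST makes: the
guest loses the device only for the short tail of the migration rather than
for the whole of it.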