On Mon, Aug 16, 2021 at 05:00:21PM +0200, Paolo Bonzini wrote:
> On 16/08/21 16:23, Daniel P. Berrangé wrote:
> > snip
> >
> > > With this implementation, the number of mirror vCPUs does not even have to
> > > be indicated on the command line. The VM and its vCPUs can simply be
> > > created when migration starts. In the SEV-ES case, the guest can even
> > > provide the VMSA that starts the migration helper.
> >
> > I don't think management apps will accept that approach when pinning
> > guests. They will want control over how many extra vCPU threads are
> > created, what host pCPUs they are pinned to, and what scheduler
> > policies might be applied to them.
>
> That doesn't require creating the migration threads at startup, or making
> them vCPU threads, does it?
>
> The migration helper is guest code that is run within the context of the
> migration helper in order to decrypt/re-encrypt the code (using a different
> tweak that is based on e.g. the ram_addr_t rather than the HPA). How does
> libvirt pin migration thread(s) currently?

I don't think we do explicit pinning of migration-related threads right
now, which means they'll inherit the characteristics of whichever thread
spawns the transient migration thread. If the mgmt app has pinned the
emulator threads to a single CPU, then creating many migration threads
is a waste of time, as they'll compete with each other.

It wouldn't be necessary to create the migration threads at startup -
we should just think about how we would identify and control them when
they are created at runtime. The complexity here is a trust issue -
once guest code has been run, we can't trust what QMP tells us - so
I'm not sure how we would learn which PIDs are associated with the
transiently created migration threads, in order to set their
properties.

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
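
For reference, a minimal sketch of the mechanism such pinning ultimately
relies on: from outside the QEMU process, a management layer can pin a
single thread to a host pCPU with sched_setaffinity(2), given that
thread's kernel TID. The sketch assumes the TID is already known (which
is exactly the missing piece for the transient migration threads
discussed above, so here it is just taken from the command line), and
pin_tid_to_pcpu is a hypothetical helper, not a libvirt function:

  /*
   * Illustrative sketch only - not libvirt code.  Pin a single thread,
   * identified by its kernel TID, to one host pCPU from outside the
   * target process.
   */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>

  static int pin_tid_to_pcpu(pid_t tid, int pcpu)
  {
      cpu_set_t mask;

      CPU_ZERO(&mask);
      CPU_SET(pcpu, &mask);

      /* With a TID (rather than a process PID) only that thread is affected */
      if (sched_setaffinity(tid, sizeof(mask), &mask) < 0) {
          perror("sched_setaffinity");
          return -1;
      }
      return 0;
  }

  int main(int argc, char **argv)
  {
      if (argc != 3) {
          fprintf(stderr, "usage: %s <tid> <pcpu>\n", argv[0]);
          return 1;
      }

      return pin_tid_to_pcpu((pid_t)atoi(argv[1]), atoi(argv[2])) == 0 ? 0 : 1;
  }

For vCPU threads the TIDs can be discovered over QMP (e.g. via
query-cpus-fast), which is why pinning them is straightforward today;
the trust problem above is that no equivalent, trustworthy discovery
exists for threads created transiently after guest code has run.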