On 18/09/2023 15:05, David Woodhouse wrote:
> On 18 September 2023 14:41:08 BST, Paul Durrant <xadimgnik@xxxxxxxxx> wrote:
>> Well, if the VMM is using the default then it can't unmap it. But setting a vcpu_info *after* enabling any event channels would be a very odd thing for a guest to do and IMO it gets to keep the pieces if it does so.
> Hm, I suppose I'm OK with that approach. The fact that both VMM implementations using this KVM/Xen support let the guest keep precisely those pieces is a testament to that :)
I can have the selftest explicitly set the vcpu_info to point at the one
that's already in use, I suppose... so that would at least make sure the
attribute is functioning.
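i.e. something along these lines in the selftest (DEFAULT_VCPU_INFO_GPA is just a placeholder here for wherever the default vcpu_info already lives, i.e. the one embedded in the shared_info page):

	struct kvm_xen_vcpu_attr vi = {
		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
		/* Point at the vcpu_info that is already in use
		 * (placeholder GPA). */
		.u.gpa = DEFAULT_VCPU_INFO_GPA,
	};

	vcpu_ioctl(vcpu, KVM_XEN_VCPU_SET_ATTR, &vi);
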
> But now we're hard-coding the behaviour in the kernel and declaring that no VMM will be *able* to "fix" that case even if it does want to. So perhaps it wants a modicum more thought and at least some explicit documentation to that effect?
> And a hand-wavy plan at least for what we'd do if we suddenly did find a reason to care?
Handwavy plan would be for the VMM to:
a) Mask all open event channels targeting the vcpu
b) Copy vcpu_info content to the new location
c) Tell KVM where it is
d) Unmask the masked event channels
Does that sound ok? If so I can stick it in the API documentation.
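
To make that concrete, the sort of thing I have in mind on the VMM side (just a sketch, not tested; the vmm_evtchn_* helpers are stand-ins for however the VMM tracks and masks the event channels it has bound to the vCPU, and step (c) uses the existing KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO attribute):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Stand-in helpers: mask/unmask an event channel (e.g. by setting or
 * clearing its bit in the evtchn_mask[] bitmap of shared_info, which
 * the VMM has mapped) and walk the ports currently bound to a vCPU. */
extern void vmm_evtchn_mask(unsigned int port);
extern void vmm_evtchn_unmask(unsigned int port);
extern int vmm_next_port_for_vcpu(unsigned int vcpu_idx, int prev);

static int relocate_vcpu_info(int vcpu_fd, unsigned int vcpu_idx,
			      void *old_vi_hva, void *new_vi_hva,
			      size_t vi_size, uint64_t new_vi_gpa)
{
	struct kvm_xen_vcpu_attr attr = {
		.type = KVM_XEN_VCPU_ATTR_TYPE_VCPU_INFO,
		.u.gpa = new_vi_gpa,
	};
	int port, ret;

	/* a) Mask all open event channels targeting the vCPU, so
	 *    nothing new lands in the old vcpu_info while we copy. */
	for (port = vmm_next_port_for_vcpu(vcpu_idx, -1); port >= 0;
	     port = vmm_next_port_for_vcpu(vcpu_idx, port))
		vmm_evtchn_mask(port);

	/* b) Copy the vcpu_info content to the new location. */
	memcpy(new_vi_hva, old_vi_hva, vi_size);

	/* c) Tell KVM where it is. */
	ret = ioctl(vcpu_fd, KVM_XEN_VCPU_SET_ATTR, &attr);
	if (ret)
		return ret;

	/* d) Unmask the event channels masked in (a). */
	for (port = vmm_next_port_for_vcpu(vcpu_idx, -1); port >= 0;
	     port = vmm_next_port_for_vcpu(vcpu_idx, port))
		vmm_evtchn_unmask(port);

	return 0;
}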