On Mon, 2023-09-18 at 17:15 +0100, Paul Durrant wrote:
> > > + Note that, if the guest sets an explicit vcpu_info location in guest
> > > + memory then the VMM is expected to copy the content of the structure
> > > + embedded in the shared_info page to the new location. It is therefore
> > > + important that no event delivery is in progress at this time, otherwise
> > > + events may be missed.
> > >
> >
> > That's difficult. It means tearing down all interrupts from passthrough
> > devices which are mapped via PIRQs, and also all IPIs.
>
> So those don't honour event channel masking? That seems like a problem.

Oh, *mask*. Sure, it does honour masking. But... that would mean the VMM
has to keep track of which ports were *really* masked by the guest, and
which ones were just masked for the switchover. Including if the guest
does some mask/unmask activity *while* the switchover is happening (or
locking to prevent such). I still don't think that's a kind thing to be
telling the VMMs they need to do.

> >
> > The IPI code *should* be able to fall back to just letting the VMM
> > handle the hypercall in userspace. But PIRQs are harder. I'd be happier
> > if our plan — handwavy though it may be — led to being able to use the
> > existing slow path for delivering interrupts by just *invalidating* the
> > cache. Maybe we *should* move the memcpy into the kernel, and let it
> > lock *both* the shinfo and new vcpu_info caches while it's doing the
> > copy? Given that that's the only valid transition, that shouldn't be so
> > hard, should it?
> >
>
> No, it just kind of oversteps the remit of the attribute... but I'll try
> adding it and see how messy it gets.

Well, there's a reason I left all the vcpu_info address magic in
userspace in the first place. It was there in João's original patches
and I ripped it all out. But I see your logic for wanting to put it
back; I suspect moving the memcpy too is part of the cost of that?
Should work out OK, I think.