On Tue, 2023-09-19 at 16:47 +0100, Paul Durrant wrote:
> > I think they look interchangeable in this case. If we *do* take them
> > both in kvm_xen_set_evtchn_fast() then maybe we can simplify the slow
> > path where it set the bits in shared_info but then the vcpu_info gpc
> > was invalid. That currently uses a kvm->arch.xen.evtchn_pending_sel
> > shadow of the bits, and just kicks the vCPU to deliver them for
> > itself... but maybe that whole thing could be dropped, and
> > kvm_xen_set_evtchn_fast() can just return EWOULDBLOCK if it fails to
> > lock *both* shared_info and vcpu_info at the same time?
> 
> Yes, I think that sounds like a neater approach.
> 
> > I didn't do that before, because I didn't want to introduce lock
> > ordering rules. But I'm happier to do so now. And I think we can ditch
> > a lot of hairy asm in kvm_xen_inject_pending_events() ?
> 
> Messing with the asm sounds like something for a follow-up though.

AFAICT we can just delete the whole bloody lot. But yes, it's
definitely a later cleanup, enabled by the fact that we (will) now have
an agreed way of taking both locks at once, which I didn't want to do
in the first place.