On Wed, 2024-01-17 at 08:51 -0800, Sean Christopherson wrote:
> On Wed, Jan 17, 2024, David Woodhouse wrote:
> > On Wed, 2024-01-17 at 08:09 -0800, Sean Christopherson wrote:
> > > On Fri, Jan 12, 2024, David Woodhouse wrote:
> > > As you note above, some other mutex _should_ be held. I think we
> > > should lean into that. E.g.
> >
> > I don't. I'd like this code to stand alone *without* making the caller
> > depend on "some other lock" just for its own internal consistency.
>
> Hmm, I get where you're coming from, but protecting a per-vCPU asset with
> vcpu->mutex is completely sane/reasonable. Xen's refresh from a completely
> different task is the oddball.

Well yes, because that's what the gfn_to_pfn_cache is *for*, surely?

If we were in a context where we could sleep and take mutexes like the
vcpu mutex (and without deadlocking with a thread actually running that
vCPU), then we could mostly just use kvm_write_guest().

The gfn_to_pfn_cache exists specifically to handle that 'oddball' case.

> And unnecessarily taking multiple mutexes muddies the water, e.g. it's
> not clear what role kvm->arch.xen.xen_lock plays when it's acquired by
> kvm_xen_set_evtchn().

Right, I was frowning at that the other day. I believe it's there purely
because the gfn_to_pfn_cache *wasn't* self-contained with its own
consistent locking, and did this awful "caller must do my locking for
me" thing.

I'd like to fix the gfn_to_pfn_cache locking to be internally complete
and consistent; then I think we probably don't need arch.xen.xen_lock
in kvm_xen_set_evtchn(). I'm going to give that a lot more thought,
though, and not attempt to shoe-horn it into this patch.