Re: [PATCH] KVM: pfncache: rework __kvm_gpc_refresh() to fix locking issues

On Wed, Jan 17, 2024, David Woodhouse wrote:
> On Wed, 2024-01-17 at 08:51 -0800, Sean Christopherson wrote:
> > On Wed, Jan 17, 2024, David Woodhouse wrote:
> > > On Wed, 2024-01-17 at 08:09 -0800, Sean Christopherson wrote:
> > > > On Fri, Jan 12, 2024, David Woodhouse wrote:
> > > > As you note above, some other mutex _should_ be held.  I think we should lean
> > > > into that.  E.g.
> > > 
> > > I don't. I'd like this code to stand alone *without* making the caller
> > > depend on "some other lock" just for its own internal consistency.
> > 
> > Hmm, I get where you're coming from, but protecting a per-vCPU asset with
> > vcpu->mutex is completely sane/reasonable.  Xen's refresh from a completely
> > different task is the oddball.  
> 
> Well yes, because that's what the gfn_to_pfn_cache is *for*, surely?
> 
> If we were in a context where we could sleep and take mutexes like the
> vcpu mutex (and without deadlocking with a thread actually running that
> vCPU), then we could mostly just use kvm_write_guest().
> 
> The gfn_to_pfn_cache exists specifically to handle that 'oddball' case.

No?  As I see it, the main role of the gpc code is to handle the mmu_notifier
interactions.  I am not at all convinced that the common code should take on
supporting "oh, and by the way, any task can use the cache from any context".

That's the "oddball" I am referring to.  I'm not entirely opposed to making the
gpc code fully standalone, but I would like to at least get to the point where
have very high confidence that arch.xen.xen_lock can be fully excised before
committing to handling that use case in the common gpc code.
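To make the distinction concrete, here is a minimal userspace sketch of the two
locking models being argued over.  It is purely illustrative: it is not the
pfncache API, and every name in it is invented for the example.  The "external"
variant depends on a lock the caller must already hold (the role that
arch.xen.xen_lock plays today), while the "self-contained" variant owns its own
lock and can be called from any task.

/*
 * Illustrative userspace sketch only; not kernel code and not the real
 * gfn_to_pfn_cache API.  All names are invented for the example.
 */
#include <errno.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct demo_cache {
	pthread_mutex_t lock;	/* used only by the self-contained model */
	uint64_t generation;
	int valid;
};

/*
 * External model: the caller's mutex is the only thing serializing
 * refresh against use, so the best the cache can do is assert that it
 * is held.  (Crude stand-in for lockdep_assert_held(): trylock on a
 * normal, non-recursive mutex that is already held returns EBUSY.)
 */
static void external_refresh(struct demo_cache *c, pthread_mutex_t *caller_lock)
{
	int ret = pthread_mutex_trylock(caller_lock);

	if (ret != EBUSY) {
		fprintf(stderr, "caller failed to hold its lock\n");
		if (!ret)
			pthread_mutex_unlock(caller_lock);
		return;
	}
	c->generation++;
	c->valid = 1;
}

/*
 * Self-contained model: the cache serializes its own updates, so any
 * task may refresh it without knowing about the caller's wider locking.
 */
static void selfcontained_refresh(struct demo_cache *c)
{
	pthread_mutex_lock(&c->lock);
	c->generation++;
	c->valid = 1;
	pthread_mutex_unlock(&c->lock);
}

int main(void)
{
	pthread_mutex_t vcpu_like_lock = PTHREAD_MUTEX_INITIALIZER;
	struct demo_cache c = { .lock = PTHREAD_MUTEX_INITIALIZER };

	/* External model: every caller must wrap the cache in "its" lock. */
	pthread_mutex_lock(&vcpu_like_lock);
	external_refresh(&c, &vcpu_like_lock);
	pthread_mutex_unlock(&vcpu_like_lock);

	/* Self-contained model: no caller-side lock needed for consistency. */
	selfcontained_refresh(&c);

	printf("generation %llu, valid %d\n",
	       (unsigned long long)c.generation, c.valid);
	return 0;
}

In the kernel, the assertion in the external model would be a
lockdep_assert_held() on whatever mutex the caller nominates, which is exactly
the kind of implicit contract under discussion here.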

> > And unnecessarily taking multiple mutexes muddies
> > the water, e.g. it's not clear what role kvm->arch.xen.xen_lock plays when it's
> > acquired by kvm_xen_set_evtchn().
> 
> Right, I was frowning at that the other day. I believe it's there
> purely because the gfn_to_pfn_cache *wasn't* self-contained with its
> own consistent locking, and did this awful "caller must do my locking
> for me" thing.
> 
> I'd like to fix the gfn_to_pfn_cache locking to be internally complete
> and consistent, then I think we probably don't need arch.xen.xen_lock
> in kvm_xen_set_evtchn(). I'm going to give that a lot more thought
> though, and not attempt to shoe-horn it into this patch.
> 