Re: [RFC PATCH 0/11] Rework gfn_to_pfn_cache

On 16 November 2021 18:46:03 GMT, Paolo Bonzini <bonzini@xxxxxxx> wrote:
>On 11/16/21 18:57, David Woodhouse wrote:
>>>> +		read_lock(&gpc->lock);
>>>> +		if (!kvm_gfn_to_pfn_cache_check(vcpu->kvm, gpc, gpc->gpa, PAGE_SIZE)) {
>>>> +			read_unlock(&gpc->lock);
>>>>    			goto mmio_needed;
>>>> +		}
>>>> +
>>>> +		vapic_page = gpc->khva;
>>> If we know this gpc is of the synchronous kind, I think we can skip the
>>> read_lock/read_unlock here?!?
>> Er... this one was OUTSIDE_GUEST_MODE and is the atomic kind, which
>> means it needs to hold the lock for the duration of the access in order
>> to prevent (preemption and) racing with the invalidate?
>> 
>> It's the IN_GUEST_MODE one (in my check_guest_maps()) where we might
>> get away without the lock, perhaps?
>
>Ah, this is check_nested_events which is mostly IN_GUEST_MODE but not 
>always (and that sucks for other reasons).  I'll think a bit more about 
>it when I actually do the work.
>
>>>>    		__kvm_apic_update_irr(vmx->nested.pi_desc->pir,
>>>>    			vapic_page, &max_irr);
>>>> @@ -3749,6 +3783,7 @@ static int vmx_complete_nested_posted_interrupt(struct kvm_vcpu *vcpu)
>>>>    			status |= (u8)max_irr;
>>>>    			vmcs_write16(GUEST_INTR_STATUS, status);
>>>>    		}
>>>> +		read_unlock(&gpc->lock);
>>>>    	}
>>>>    
>> I just realised that the mark_page_dirty() on invalidation and when
>> the irqfd workqueue refreshes the gpc might fall foul of the same
>> dirty_ring problem that I belatedly just spotted with the Xen shinfo
>> clock write. I'll fix it up to *always* require a vcpu (to be
>> associated with the writes), and reinstate the guest_uses_pa flag since
>> that can no longer be implicit in (vcpu != NULL).
>
>Okay.
>
>> I may leave the actual task of fixing nesting to you, if that's OK, as
>> long as we consider the new gfn_to_pfn_cache sufficient to address the
>> problem? I think it's mostly down to how we *use* it now, rather than
>> the fundamental design of the cache itself?
>
>Yes, I agree.  Just stick whatever you have for nesting as an extra 
>patch at the end, and I'll take it from there.


Will do; thanks. I believe you said you'd already merged a batch that kills the first three of the nesting kvm_vcpu_map() users, so I'll wait for those to show up in a tree I can pull from. Then I'll post a single series: the unified Makefile.kvm bits, followed by pfncache.c and the Xen event channel support using it, and, as requested, ending with the nested pfncache patch I posted a couple of messages ago.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



