Re: [PATCH v6 03/38] KVM: x86: hyper-v: Introduce TLB flush fifo

Maxim Levitsky <mlevitsk@xxxxxxxxxx> writes:

> On Mon, 2022-06-06 at 10:36 +0200, Vitaly Kuznetsov wrote:
>> To allow flushing individual GVAs instead of always flushing the
>> whole
>> VPID a per-vCPU structure to pass the requests is needed. Use
>> standard
>> 'kfifo' to queue two types of entries: individual GVA (GFN + up to
>> 4095
>> following GFNs in the lower 12 bits) and 'flush all'.
>
> Honestly I still don't think I understand why we can't just
> raise KVM_REQ_TLB_FLUSH_GUEST when the guest uses this interface
> to flush everything, and then we won't need to touch the ring
> at all.

The main reason is that we need to know what to flush: L1 or
L2. E.g. for VMX, KVM_REQ_TLB_FLUSH_GUEST is basically

vpid_sync_context(vmx_get_current_vpid(vcpu));

which means that if the target vCPU transitions from L1 to L2 or vice
versa before KVM_REQ_TLB_FLUSH_GUEST gets processed, we will flush the
wrong VPID. And the writer (the vCPU which processes the TLB flush
hypercall) is not synchronized in any way with the reader (the vCPU
whose TLB needs to be flushed), so we can't even know whether the
target vCPU is in guest mode or not.
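
(For reference, vmx_get_current_vpid() picks the VPID based on the
vCPU's *current* mode, roughly like this (from memory, so take it as a
sketch rather than the exact code):

static inline int vmx_get_current_vpid(struct kvm_vcpu *vcpu)
{
	/* L2 active: use the VPID allocated for the nested guest */
	if (is_guest_mode(vcpu))
		return nested_get_vpid02(vcpu);

	/* L1 active: use L1's VPID */
	return to_vmx(vcpu)->vpid;
}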

With the newly added KVM_REQ_HV_TLB_FLUSH, we always look at the
corresponding FIFO and handle 'flush all' entries accordingly. In case
the vCPU switches between modes, we always raise KVM_REQ_HV_TLB_FLUSH
to make sure the FIFO gets re-checked. Note: we can't raise
KVM_REQ_TLB_FLUSH_GUEST instead, as it always means 'full TLB flush'
and we certainly don't want that.
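
To illustrate the idea (a sketch, not the patch itself: the kfifo API
is the real <linux/kfifo.h> one, everything else defined below is
named for illustration only, and flush_gva_range() is a stand-in for
the actual per-GVA flush hook):

/* Uses <linux/kfifo.h> and <linux/spinlock.h>; INIT_KFIFO() at vCPU setup. */

#define TLB_FLUSH_FIFO_SIZE	16		/* kfifo size must be a power of two */
#define TLB_FLUSH_ALL_ENTRY	((u64)-1)	/* sentinel meaning 'flush everything' */

struct hv_tlb_flush_fifo {
	spinlock_t write_lock;			/* serializes producers */
	DECLARE_KFIFO(entries, u64, TLB_FLUSH_FIFO_SIZE);
};

/*
 * Entry format from the patch description: page-aligned GVA in the upper
 * bits, the number of additional consecutive pages (up to 4095) in the
 * lower 12 bits.
 */
static u64 tlb_flush_entry(u64 gva, u32 extra_pages)
{
	return (gva & ~0xfffULL) | (extra_pages & 0xfff);
}

/* Producer: the vCPU handling the hypercall queues work for the target. */
static void hv_tlb_flush_enqueue(struct hv_tlb_flush_fifo *fifo,
				 u64 *entries, int count)
{
	u64 flush_all = TLB_FLUSH_ALL_ENTRY;

	spin_lock(&fifo->write_lock);
	if (count > 0 && kfifo_avail(&fifo->entries) >= count)
		kfifo_in(&fifo->entries, entries, count);
	else
		/*
		 * No precise entries or no room: degrade to 'flush all'.
		 * (A completely full fifo needs extra care, glossed over here.)
		 */
		kfifo_in(&fifo->entries, &flush_all, 1);
	spin_unlock(&fifo->write_lock);

	/* ...followed by kvm_make_request(KVM_REQ_HV_TLB_FLUSH, vcpu) + kick. */
}

/* Consumer: runs on the target vCPU when KVM_REQ_HV_TLB_FLUSH is processed. */
static void hv_tlb_flush_process(struct kvm_vcpu *vcpu,
				 struct hv_tlb_flush_fifo *fifo)
{
	u64 entry;

	while (kfifo_out(&fifo->entries, &entry, 1)) {
		if (entry == TLB_FLUSH_ALL_ENTRY) {
			kfifo_reset_out(&fifo->entries);	/* drop the rest */
			kvm_vcpu_flush_tlb_guest(vcpu);		/* full flush */
			return;
		}
		/* Targeted flush of 1 + (entry & 0xfff) pages starting at the GVA. */
		flush_gva_range(vcpu, entry & ~0xfffULL, (entry & 0xfff) + 1);
	}
}

The point is that the decision of what to flush is made on the target
vCPU itself, at request processing time, when we do know which mode it
is in.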

-- 
Vitaly



