Re: [Android-virt] [PATCH RFC v2 3/3] ARM: KVM: Add support for MMU notifiers

On 12.02.2012, at 02:12, Christoffer Dall <c.dall@xxxxxxxxxxxxxxxxxxxxxx> wrote:

> On Sat, Feb 11, 2012 at 10:33 AM, Antonios Motakis
> <a.motakis@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>> On 02/11/2012 06:35 PM, Christoffer Dall wrote:
>>> 
>>> On Sat, Feb 11, 2012 at 7:00 AM, Antonios Motakis
>>> <a.motakis@xxxxxxxxxxxxxxxxxxxxxx>  wrote:
>>>> 
>>>> On 02/10/2012 11:22 PM, Marc Zyngier wrote:
>>>>> 
>>>>> +ENTRY(__kvm_tlb_flush_vmid)
>>>>> +       hvc     #0                      @ Switch to Hyp mode
>>>>> +       push    {r2, r3}
>>>>> 
>>>>> +       ldrd    r2, r3, [r0, #KVM_VTTBR]
>>>>> +       mcrr    p15, 6, r2, r3, c2      @ Write VTTBR
>>>>> +       isb
>>>>> +       mcr     p15, 0, r0, c8, c7, 0   @ TLBIALL
>>>>> +       dsb
>>>>> +       isb
>>>>> +       mov     r2, #0
>>>>> +       mov     r3, #0
>>>>> +       mcrr    p15, 6, r2, r3, c2      @ Back to VMID #0
>>>>> +       isb
>>>>> +
>>>>> +       pop     {r2, r3}
>>>>> +       hvc     #0                      @ Back to SVC
>>>>> +       mov     pc, lr
>>>>> +ENDPROC(__kvm_tlb_flush_vmid)
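
(For reference, a minimal sketch of a C-side caller, assuming r0 carries the
struct kvm pointer as the KVM_VTTBR offset load suggests; the call-site name
below is made up.)

	/* Prototype for the Hyp helper above; the hvc #0 at entry/exit
	 * means it can be called directly from SVC-mode kernel code. */
	void __kvm_tlb_flush_vmid(struct kvm *kvm);

	static void flush_guest_tlb(struct kvm *kvm)
	{
		__kvm_tlb_flush_vmid(kvm); /* switch to Hyp, set VTTBR, TLBIALL */
	}
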
>>>> 
>>>> 
>>>> With the last VMID implementation, you could get the equivalent effect
>>>> of a per-VMID flush, by just getting a new VMID for the current VM. So
>>>> you could do a (kvm->arch.vmid = 0) to force a new VMID when the guest
>>>> reruns, and save the overhead of that flush (you will do a complete
>>>> flush every 255 times instead of a small one every single time).
>>>> 
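(A rough sketch of this suggestion, with made-up names; it assumes the VMID
allocator from the earlier patches treats vmid == 0 as "no VMID assigned" and
hands out a fresh one on the next guest entry.)

	/* Instead of flushing, drop the VMID so the stale TLB entries
	 * become unreachable; the VM picks up a fresh VMID before it
	 * runs again. */
	static void kvm_invalidate_vmid(struct kvm *kvm)
	{
		kvm->arch.vmid = 0;	/* 0 == "not assigned", by assumption */
	}

	/* On the next guest entry, something along these lines would run
	 * (alloc_new_vmid() and VTTBR_VMID_SHIFT are illustrative): */
	static void update_vttbr(struct kvm *kvm)
	{
		if (kvm->arch.vmid == 0) {
			kvm->arch.vmid  = alloc_new_vmid(); /* may roll over and flush all TLBs */
			kvm->arch.vttbr = kvm_get_pgd_phys(kvm) |
					  ((u64)kvm->arch.vmid << VTTBR_VMID_SHIFT);
		}
	}
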
>>> To do this you would need to send an IPI if the guest is currently
>>> executing on another CPU and make it exit the guest, so that the VMID
>>> assignment runs before the guest potentially accesses a TLB entry
>>> pointing to the page that was just reclaimed - which I am not sure
>>> will be better than this solution.
>> 
>> Don't you have to do this anyway? You'd want the flush to be effective on
>> all CPUs before proceeding.
> 
> Hmm, yeah, actually you do need this, unless the -IS version of the
> flush instruction covers all relevant cores in this case. Marc, I
> don't think the processor clearing out the page table entry will
> necessarily belong to the same inner-shareable domain as the processor
> potentially executing the VM, so the -IS flushing variant would not be
> sufficient and we would actually have to send an IPI.
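
(Generic KVM already has a helper for exactly this kind of kick:
kvm_flush_remote_tlbs() raises KVM_REQ_TLB_FLUSH on every vcpu and IPIs the
CPUs currently running in guest mode. A sketch of how the notifier path could
use it; the unmap helper name is made up.)

	static void handle_hva_invalidation(struct kvm *kvm, unsigned long hva)
	{
		unmap_stage2_page(kvm, hva); /* hypothetical: clear the stage-2 PTE(s) */
		kvm_flush_remote_tlbs(kvm);  /* kick vcpus out of the guest + request flush */
	}
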
> 
> So, it sounds to me like:
> 1) we have to signal all vcpus using the VMID for which we are
> clearing page table entries
> 2) make sure that they either
>    2a) flush their TLBs, or
>    2b) get a new VMID
> 
> 2b seems like it might be slightly faster, but it leaves more unused
> entries in the TLB - not sure if that's a bad thing considering the
> replacement policy. Perhaps 2a is cleaner... (rough sketches of both
> options below)
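
(For illustration, the vcpu-side handling just before guest entry could look
like either of these, using the generic request machinery; update_vttbr() is
the illustrative helper from the sketch further up.)

	/* Option 2a: each signalled vcpu flushes the TLB for its VMID. */
	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
		__kvm_tlb_flush_vmid(vcpu->kvm); /* the routine from this patch */

	/* Option 2b: the VM dropped its VMID; allocate a fresh one and
	 * let the stale entries age out of the TLB. */
	if (vcpu->kvm->arch.vmid == 0)
		update_vttbr(vcpu->kvm);
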

x86 basically does 2b, but has per-CPU TLB tags.

On PPC, we statically map the guest ID to a guest at the moment.


Alex


