Re: Fwd: [PATCH v9 13/16] ARM: KVM: Emulation framework and CP15 emulation

On 07/08/12 15:23, Avi Kivity wrote:
> On 08/07/2012 05:15 PM, Marc Zyngier wrote:
> 
>>> vcpu migration is supposed to be transparent.  What happens if you
>>> perform the operation locally, then the vcpu is migrated?
>>
>> Migrated as in "moved to another physical CPU"? We have a per-vcpu cpumask
>> indicating which CPU must perform a full cache clean/invalidate, which we
>> test in kvm_arch_vcpu_load().
>>
> 
> How is the cpumask maintained?  All cpus which were touched by the vcpu?

Each time we trap a cache invalidate by set/way (and only these), we
perform it on the local CPU, and flag all other CPUs to nuke their
own caches if they ever run this vcpu (using the above cpumask).
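
Roughly, the trap side does something like this (sketch only -- the
handler name and plumbing are made up, the cpumask and flush_cache_all()
are the same ones as in the snippet further down):

/*
 * Illustrative sketch, not the actual handler: flag every other
 * physical CPU as needing a flush before it runs this vcpu again,
 * then do the heavy-handed clean/invalidate locally.
 */
static void handle_dcache_by_set_way(struct kvm_vcpu *vcpu)
{
	int cpu = get_cpu();	/* stay non-preemptible while flushing */

	cpumask_setall(&vcpu->arch.require_dcache_flush);
	cpumask_clear_cpu(cpu, &vcpu->arch.require_dcache_flush);

	flush_cache_all();

	put_cpu();
}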

Whenever this vcpu is scheduled on another CPU, we execute this from
kvm_arch_vcpu_load():
/*
 * Check whether this vcpu requires the cache to be flushed on
 * this physical CPU. This is a consequence of doing dcache
 * operations by set/way on this vcpu. We do it here to be in
 * a non-preemptible section.
 */
if (cpumask_test_and_clear_cpu(cpu, &vcpu->arch.require_dcache_flush))
	flush_cache_all(); /* We'd really want v7_flush_dcache_all() */


> btw, x86 mostly ignores cache invalidates (an exception is device
> assignment).

We cannot afford to do this on ARM when the guest executes an invalidate
by set/way. All the other cache maintenance operations are safe and run
directly in the guest.

	M.
-- 
Jazz is not dead. It just smells funny...


_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/cucslists/listinfo/kvmarm

