On 07/16/2012 10:46 PM, Marc Zyngier wrote:
> On 16 Jul 2012, at 15:39, Avi Kivity <avi at redhat.com> wrote:
>
>> On 07/16/2012 05:12 PM, Christoffer Dall wrote:
>>>>
>>>> And you said the reason for disabling preemption is CPU-specific
>>>> data such as caches. But as far as I know, the L1 caches are coherent.
>>>> (http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0438e/BABFHDFE.html)
>>>>
>>>> Can you please explain in detail why disabling preemption is necessary?
>>>>
>>>
>>> If a VM does a cache maintenance operation specific to the current CPU
>>> and that operation traps, we want to make sure the emulation of the
>>> operation happens on the same physical CPU, to ensure correct semantics.
>>
>> Can you give an example of those cache maintenance operations?
>
> When the guest does cache maintenance by set/way, these operations must
> occur on the local CPU, and only there. To ensure they get propagated,
> we trap them, execute the operation on the current CPU, and set a flag
> to nuke the caches on the other CPUs the next time they run this vcpu.

Yes, but what are those cache maintenance operations?  Invalidates?  I
come from the x86 world, where the only maintenance you do is enabling
the cache and flushing it if you change the physical memory map.

>
>> Seems to me that whatever operation you do has to survive vcpu
>> migration.  So there should be some big hammer after vcpu migration to
>> cause everything to be synchronized.  Given that, you can do everything
>> with preemption enabled and trust the migration handler to fix things
>> up if you were preempted.
>>
>> (Of course, the operation itself may need to be locally unpreemptible
>> if it touches multiple registers, but that still allows you to run most
>> handlers with preemption enabled, as most archs do.)
>
> It may be that we need two classes of handlers then.

Maybe.  x86 also special-cases NMI, since it needs to run on the same cpu.

-- 
error compiling committee.c: too many arguments to function
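[Editor's note: the lazy flush-propagation scheme Marc describes (execute the trapped set/way operation locally, flag the other CPUs, nuke their caches at the next vcpu_load) can be sketched as a user-space simulation. All names here are invented for illustration; the real instructions (DC CSW/CISW) and the actual KVM/ARM implementation differ.]

```c
#include <stdio.h>

#define NR_CPUS 4

/* Hypothetical vcpu: a bitmask of physical CPUs that still owe this
 * vcpu a cache flush before it may run on them. */
struct vcpu {
	unsigned long flush_pending;
};

/* Stand-in for the CPU-local cache maintenance instruction.  On real
 * hardware this would be a set/way clean/invalidate, which only
 * affects the caches of the CPU that executes it. */
static void local_cache_flush(int cpu)
{
	printf("cache flushed on cpu %d\n", cpu);
}

/* Called with preemption disabled: the guest's trapped set/way op is
 * emulated on the CPU the vcpu is currently running on, and every
 * other CPU is flagged so the flush propagates lazily. */
static void handle_set_way_trap(struct vcpu *v, int this_cpu)
{
	local_cache_flush(this_cpu);
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (cpu != this_cpu)
			v->flush_pending |= 1UL << cpu;
}

/* The "big hammer" after vcpu migration: when the vcpu is loaded onto
 * a CPU with a pending flush, do the flush before the guest runs. */
static void vcpu_load(struct vcpu *v, int cpu)
{
	if (v->flush_pending & (1UL << cpu)) {
		local_cache_flush(cpu);
		v->flush_pending &= ~(1UL << cpu);
	}
}
```

With this structure, only the trap handler itself needs preemption disabled (the flush plus the flag update must happen on one CPU); migration is handled by the check in vcpu_load, as Avi suggests above.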