On 14/12/15 at 16:27, Konrad Rzeszutek Wilk wrote:
> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>> will likely perform the same IPIs as the guest would have.
>>
>
> But if the VCPU is asleep, doing it via the hypervisor saves us waking
> up the guest VCPU and sending an IPI just to do a TLB flush of that
> CPU, which is pointless as the CPU hadn't been running the guest in
> the first place.
>
>>
>> More importantly, using MMUEXT_INVLPG_MULTI may not invalidate the
>> guest's address on a remote CPU (when, for example, a VCPU from
>> another guest is running there).
>
> Right, so the hypervisor won't even send an IPI there.
>
> But if you do it via the normal guest IPI mechanism (which is opaque
> to the hypervisor), you end up scheduling the guest VCPU to send a
> hypervisor callback, and the callback will go to the IPI routine,
> which will do a TLB flush. Not necessary.
>
> This is all in the case of oversubscription, of course. In the case
> where we are fine on vCPU resources it does not matter.
>
> Perhaps if we had a PV-aware TLB flush it could do this differently?

Why don't HVM/PVH guests just use the HVMOP_flush_tlbs hypercall?

Roger.
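
For context, HVMOP_flush_tlbs asks Xen to flush the TLBs of every vCPU of
the calling domain, with a NULL argument. Below is a minimal sketch (an
illustration only, not the patch under discussion) of how an HVM/PVH guest
might hand the whole flush to the hypervisor instead of IPIing each vCPU
itself; the function name is made up, and it assumes the HVMOP_flush_tlbs
constant from Xen's public/hvm/hvm_op.h is visible to the kernel.

/*
 * Illustrative sketch only: let Xen flush the TLBs of all of this
 * domain's vCPUs in one hypercall, rather than sending an IPI to
 * every vCPU from inside the guest.
 */
#include <asm/xen/hypercall.h>	/* HYPERVISOR_hvm_op() */
#include <asm/tlbflush.h>	/* flush_tlb_all() fallback */

/*
 * From Xen's public/hvm/hvm_op.h; the kernel's trimmed copy of that
 * header may not carry this definition.  The argument must be NULL.
 */
#ifndef HVMOP_flush_tlbs
#define HVMOP_flush_tlbs 5
#endif

static void example_hvm_flush_all_tlbs(void)
{
	/*
	 * Xen flushes every vCPU of the calling domain; a vCPU that is
	 * not currently running does not have to be woken up for this.
	 */
	if (HYPERVISOR_hvm_op(HVMOP_flush_tlbs, NULL))
		flush_tlb_all();	/* fall back to the IPI-based flush */
}

This is the same trade-off Konrad describes above: with the hypercall the
hypervisor can skip (or defer) the flush for vCPUs that are not running,
whereas the guest-side IPI path has to wake them just to flush.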