Re: [PATCH 4/4] arm/arm64: KVM: use kernel mapping to perform invalidation on page fault

On Mon, Jan 12, 2015 at 09:58:30AM +0000, Marc Zyngier wrote:
> On 11/01/15 18:38, Christoffer Dall wrote:
> > On Sun, Jan 11, 2015 at 06:27:35PM +0000, Peter Maydell wrote:
> >> On 11 January 2015 at 17:58, Christoffer Dall
> >> <christoffer.dall@xxxxxxxxxx> wrote:
> >>> On Sun, Jan 11, 2015 at 05:37:52PM +0000, Peter Maydell wrote:
> >>>> On 11 January 2015 at 12:33, Christoffer Dall
> >>>> <christoffer.dall@xxxxxxxxxx> wrote:
> >>>>> On Fri, Jan 09, 2015 at 03:28:58PM +0000, Peter Maydell wrote:
> >>>>>> But implementations are allowed to hit in the cache even
> >>>>>> when the cache is disabled. In particular, setting the guest
> >>>>>
> >>>>> But how can it hit anything when the icache for the used VMID is
> >>>>> guaranteed to be clear (maybe that requires another full icache
> >>>>> invalidate for that VMID for PSCI reset)?
> >>>>
> >>>> The point is that at the moment we don't do anything to
> >>>> guarantee that we've cleared the icache.
> >>>
> >>> that's not entirely accurate; I assume all of the icache is
> >>> invalidated/cleaned at system bring-up time, and every time we re-use a
> >>> VMID (when we start a VMID rollover) we invalidate the entire icache.
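
[ For reference, the rollover-time invalidate referred to above is the
  kvm_call_hyp(__kvm_flush_vm_context) call on the VMID rollover path.
  In rough C terms (a sketch only -- the real thing lives in hyp
  assembly; arm64 mnemonics shown), the effect is:

	/* VMID rollover: invalidate guest TLB entries and the icache
	 * for all VMIDs, Inner Shareable, i.e. broadcast to all pCPUs.
	 */
	asm volatile("tlbi alle1is" : : : "memory");
	asm volatile("ic ialluis" : : : "memory");
	dsb(ish);
	isb();
]
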
> >>
> >> Right, but that doesn't catch the VM reset case, which is the
> >> one we're talking about.
> >>
> > 
> > ok, so that's what you meant by warm reset, I see.
> > 
> > Then I would think we should add that single invalidate on vcpu init
> > rather than flushing the icache on every page fault?
> > 
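[ Concretely, the "single invalidate on vcpu init" I have in mind is
  something like the sketch below; the first_run flag is made up for
  illustration, but __flush_icache_all() is the existing kernel helper
  (IC IALLUIS plus a DSB on arm64):

	/* on the vcpu init/reset path, before the guest runs */
	if (vcpu->arch.first_run) {		/* hypothetical flag */
		__flush_icache_all();		/* broadcast invalidate */
		vcpu->arch.first_run = false;
	}

  i.e. one icache invalidate per (re)init instead of one per fault. ]
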
> >>>> (Plus could there be
> >>>> stale data in the icache for this physical CPU for this VMID
> >>>> because we've run some other vCPU on it? Or does the process
> >>>> of rescheduling vCPUs across pCPUs and guest ASID management
> >>>> deal with that?)
> >>>
> >>> we don't clear the icache for vCPUs migrating onto other pCPUs, but
> >>> invalidating the icache on a page fault won't guarantee that either.
> >>> Do we really need to do that?
> >>
> >> I don't think we do, but I haven't thought through exactly
> >> why we don't yet :-)
> >>
> > 
> > So once you start a secondary vCPU, that one can then hit in the
> > icache from what the primary vCPU put there, which I guess is
> > different behavior from a physical secondary core coming out of reset
> > with the MMU off and never hitting the icache, right?
> > 
> > And is this not also different behavior from a native system once the
> > vCPUs have turned their MMUs on, one that we just don't happen to
> > observe as a problem?
> > 
> > In any case, I don't have a great solution for how to solve this except
> > for always invalidating the icache when we migrate a vCPU to a pCPU, but
> > that's really nasty...
> 
> No, it only needs to happen once per vcpu, on any CPU. IC IALLUIS is
> broadcast across CPUs, so once it has taken place on the first CPU this
> vcpu runs on, we're good.
> 
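[ OK -- spelling that out for my own benefit, the invalidate boils down
  to (arm64 syntax):

	asm volatile("ic ialluis");	/* invalidate all icaches to PoU,
					 * Inner Shareable: observed by
					 * every pCPU in the domain */
	dsb(ish);			/* wait for completion across
					 * the Inner Shareable domain */
	isb();				/* resync this CPU's fetch */

  so one execution, on whichever pCPU the vcpu first runs, covers any
  pCPU it may later migrate to. ]
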
But if you compare strictly to a native system, wouldn't a vCPU suddenly
be able to hit in the icache if migrated onto a pCPU that has run code
for the same VM (with the same VMID) without having turned the MMU on?

-Christoffer