Re: [PATCH 4/4] arm/arm64: KVM: use kernel mapping to perform invalidation on page fault

On Tue, Jan 13, 2015 at 11:38:54AM +0000, Marc Zyngier wrote:
> On 12/01/15 20:10, Christoffer Dall wrote:
> > On Mon, Jan 12, 2015 at 09:58:30AM +0000, Marc Zyngier wrote:
> >> On 11/01/15 18:38, Christoffer Dall wrote:
> >>> On Sun, Jan 11, 2015 at 06:27:35PM +0000, Peter Maydell wrote:
> >>>> On 11 January 2015 at 17:58, Christoffer Dall
> >>>> <christoffer.dall@xxxxxxxxxx> wrote:
> >>>>> On Sun, Jan 11, 2015 at 05:37:52PM +0000, Peter Maydell wrote:
> >>>>>> On 11 January 2015 at 12:33, Christoffer Dall
> >>>>>> <christoffer.dall@xxxxxxxxxx> wrote:
> >>>>>>> On Fri, Jan 09, 2015 at 03:28:58PM +0000, Peter Maydell wrote:
> >>>>>>>> But implementations are allowed to hit in the cache even
> >>>>>>>> when the cache is disabled. In particular, setting the guest
> >>>>>>>
> >>>>>>> But how can it hit anything when the icache for the used VMID is
> >>>>>>> guaranteed to be clear (maybe that requires another full icache
> >>>>>>> invalidate for that VMID for PSCI reset)?
> >>>>>>
> >>>>>> The point is that at the moment we don't do anything to
> >>>>>> guarantee that we've cleared the icache.
> >>>>>
> >>>>> that's not entirely accurate; I assume all of the icache is
> >>>>> invalidated/cleaned at system bring-up time, and every time we re-use a
> >>>>> VMID (when we start a VMID rollover) we invalidate the entire icache.
> >>>>
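(For reference, that rollover invalidation lives in update_vttbr() in
arch/arm/kvm/arm.c; roughly, from memory, so check the source for the
real thing:)

    /* Sketch of the VMID rollover path: once we run out of VMIDs,
     * kick every VM out of guest mode, then broadcast a full
     * TLB + icache invalidation before any VMID gets reused. */
    force_vm_exit(cpu_all_mask);          /* no vcpu may reenter   */
    kvm_call_hyp(__kvm_flush_vm_context); /* TLBI + icache inval, IS */
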
> >>>> Right, but that doesn't catch the VM reset case, which is the
> >>>> one we're talking about.
> >>>>
> >>>
> >>> ok, so that's what you meant by warm reset, I see.
> >>>
> >>> Then I would think we should add that single invalidate on vcpu init
> >>> rather than flushing the icache on every page fault?
> >>>
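(Concretely, I'd imagine something as small as the sketch below at
init/reset time; kvm_vcpu_reset_icache() is a made-up name, not an
existing function:)

    /* Sketch only: one-off icache invalidate at vcpu init/reset,
     * instead of invalidating icache lines on every page fault.
     * Hypothetical helper, not an existing kernel function. */
    static void kvm_vcpu_reset_icache(struct kvm_vcpu *vcpu)
    {
            __flush_icache_all();   /* broadcast, so once is enough */
    }
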
> >>>>>> (Plus could there be
> >>>>>> stale data in the icache for this physical CPU for this VMID
> >>>>>> because we've run some other vCPU on it? Or does the process
> >>>>>> of rescheduling vCPUs across pCPUs and guest ASID management
> >>>>>> deal with that?)
> >>>>>
> >>>>> we don't clear the icache for vCPUs migrating onto other pCPUs, but
> >>>>> invalidating the icache on a page fault won't guarantee that either.  Do
> >>>>> we really need to do that?
> >>>>
> >>>> I don't think we do, but I haven't thought through exactly
> >>>> why we don't yet :-)
> >>>>
> >>>
> >>> So once you start a secondary vCPU, that one can then hit in the
> >>> icache from what the primary vCPU put there, which I guess is
> >>> different behavior from a physical secondary core coming out of
> >>> reset with the MMU off and never hitting the icache, right?
> >>>
> >>> And is this not also a different behavior from a native system once the
> >>> vCPUs have turned their MMUs on, but we just don't happen to observe it
> >>> as being a problem?
> >>>
> >>> In any case, I don't have a great solution for how to solve this except
> >>> for always invalidating the icache when we migrate a vCPU to a pCPU, but
> >>> that's really nasty...
> >>
> >> No, it only needs to happen once per vcpu, on any CPU. IC IALLUIS is
> >> broadcast across CPUs, so once it has taken place on the first CPU this
> >> vcpu runs on, we're good.
> >>
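(For reference, the broadcast invalidate boils down to this AArch64
sequence; invalidate_icache_is() is just an illustrative name for the
sketch, and the kernel's __flush_icache_all() does the equivalent:)

    /* Invalidate the entire icache, broadcast to all CPUs in the
     * Inner Shareable domain, then synchronize this CPU. */
    static inline void invalidate_icache_is(void)
    {
            asm volatile("ic     ialluis\n\t" /* inval all icache, IS */
                         "dsb    ish\n\t"     /* wait for completion  */
                         "isb"                /* resync insn fetch    */
                         : : : "memory");
    }
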
> > But if you compare strictly to a native system, wouldn't a vCPU be able
> > to hit in the icache suddenly if migrated onto a pCPU that has run code
> > for the same VM (with the same VMID) without having turned the MMU on?
> 
> Hmmm. Yes. ASID-tagged VIVT icaches are really turning my brain into
> jelly, and that's not a pretty thing to see.
> 
> So we're back to your initial approach: Each time a vcpu is migrated to
> another CPU while its MMU/icache is off, we nuke the icache.
> 
> Do we want that in now, or do we keep that for later, when we actually
> see such an implementation?
> 
I don't think we want that in there now; if we can't test it anyway,
we're likely not to get it 100% correct.
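
Just to have it written down, I'd expect that option to be a check
along the lines of the sketch below somewhere in the vcpu_load path
(untested; vcpu_has_cache_enabled() being the SCTLR check from earlier
in this series, and the function name made up):

    /* Untested sketch: when a vcpu migrates to a different physical
     * CPU while the guest MMU/caches are still off, nuke the icache
     * so it cannot hit stale lines left there under the same VMID. */
    static void vcpu_migration_icache_check(struct kvm_vcpu *vcpu, int cpu)
    {
            if (vcpu->cpu != cpu && !vcpu_has_cache_enabled(vcpu))
                    __flush_icache_all();
    }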

Additionally, I haven't been able to think of a reasonable guest
scenario where this breaks.  Once the guest turns on its MMU it should
deal with the necessary icache invalidation itself (I think), so we're
really talking about situations where the stage-1 MMU is off, and I
gather that mostly you'll be seeing a single core doing any heavy
lifting and then secondary cores basically coming up, only seeing valid
entries in the icache, and doing the necessary invalidate + turn-on-MMU
stuff.

But I haven't spent days thinking about this yet.

-Christoffer


