Re: [PATCH 4/4] arm/arm64: KVM: use kernel mapping to perform invalidation on page fault

On Sun, Jan 11, 2015 at 06:27:35PM +0000, Peter Maydell wrote:
> On 11 January 2015 at 17:58, Christoffer Dall
> <christoffer.dall@xxxxxxxxxx> wrote:
> > On Sun, Jan 11, 2015 at 05:37:52PM +0000, Peter Maydell wrote:
> >> On 11 January 2015 at 12:33, Christoffer Dall
> >> <christoffer.dall@xxxxxxxxxx> wrote:
> >> > On Fri, Jan 09, 2015 at 03:28:58PM +0000, Peter Maydell wrote:
> >> >> But implementations are allowed to hit in the cache even
> >> >> when the cache is disabled. In particular, setting the guest
> >> >
> >> > But how can it hit anything when the icache for the used VMID is
> >> > guaranteed to be clear (maybe that requires another full icache
> >> > invalidate for that VMID for PSCI reset)?
> >>
> >> The point is that at the moment we don't do anything to
> >> guarantee that we've cleared the icache.
> >
> > That's not entirely accurate: I assume all of the icache is
> > invalidated/cleaned at system bring-up time, and every time we re-use a
> > VMID (when we start a VMID rollover) we invalidate the entire icache.
> 
> Right, but that doesn't catch the VM reset case, which is the
> one we're talking about.
> 

OK, so that's what you meant by warm reset, I see.

Then I would think we should add that single icache invalidate on vcpu
init rather than flushing the icache on every page fault?
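
Something along these lines, as a rough sketch only (I'm assuming
__flush_icache_all() is an acceptable hammer here since we can't target
a single VMID from the host, and kvm_arm_vcpu_reset_state() is a
made-up name standing in for the rest of the existing reset path):

#include <asm/cacheflush.h>	/* for __flush_icache_all() */

int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
	/*
	 * Invalidate the whole icache once when the vcpu is
	 * (re)initialised, so a warm-reset guest can't hit stale
	 * lines left over from its previous life.  We can't
	 * invalidate by VMID from the host, so this over-invalidates,
	 * but it runs once per reset instead of on every page fault.
	 */
	__flush_icache_all();

	return kvm_arm_vcpu_reset_state(vcpu);	/* hypothetical */
}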

> >> (Plus could there be
> >> stale data in the icache for this physical CPU for this VMID
> >> because we've run some other vCPU on it? Or does the process
> >> of rescheduling vCPUs across pCPUs and guest ASID management
> >> deal with that?)
> >
> > We don't clear the icache for vCPUs migrating onto other pCPUs, but
> > invalidating the icache on a page fault won't guarantee that either.
> > Do we really need to do that?
> 
> I don't think we do, but I haven't thought through exactly
> why we don't yet :-)
> 

So once you start a secondary vCPU, that one can then hit in the icache
from what the primary vCPU put there, which I guess is different
behavior from a physical secondary core coming out of reset with the
MMU off and never hitting the icache, right?

And is this not also different behavior from a native system once the
vCPUs have turned their MMUs on, one which we just don't happen to
observe as being a problem?

In any case, I don't have a great solution here except for always
invalidating the icache when we migrate a vCPU to a different pCPU, but
that's really nasty...
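
For the record, the nasty variant would look something like this in
kvm_arch_vcpu_load() (sketch only: this is the literal "invalidate when
the vcpu moves" check, and __flush_icache_all() hits all VMIDs, not
just ours):

void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
	/*
	 * vcpu->cpu still holds the pCPU this vcpu last ran on; if
	 * we're landing somewhere else, nuke the icache before the
	 * guest runs so it can't hit lines it never made coherent on
	 * this physical CPU.
	 */
	if (vcpu->cpu != cpu)
		__flush_icache_all();

	vcpu->cpu = cpu;
	/* ... existing load path ... */
}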

-Christoffer


