On Fri, Oct 05, 2018 at 02:09:08PM +1000, David Gibson wrote:
> On Thu, Oct 04, 2018 at 09:56:02PM +1000, Paul Mackerras wrote:
> > From: Suraj Jitindar Singh <sjitindarsingh@xxxxxxxxx>
> >
> > This is only done at level 0, since only level 0 knows which
> > physical CPU a vcpu is running on.  This does for nested guests
> > what L0 already did for its own guests, which is to flush the TLB
> > on a pCPU when it goes to run a vCPU there, if there is another
> > vCPU in the same VM which previously ran on this pCPU and has now
> > started to run on another pCPU.  This handles the situation where
> > the other vCPU touched a mapping, moved to another pCPU and did a
> > tlbiel (local-only tlbie) on that new pCPU, thus leaving behind a
> > stale TLB entry on this pCPU.
> >
> > This introduces a limit on the vcpu_token values used in the
> > H_ENTER_NESTED hcall -- they must now be less than NR_CPUS.
>
> This does make the vcpu tokens no longer entirely opaque to the L0.
> It works for now, because the only L1 is Linux and we know basically
> how it allocates those tokens.  Eventually we probably want some way
> to either remove this restriction or to advertise the limit to the
> L1.

Right, we could use something like a hash table and have it be
basically just as efficient as the array when the set of IDs is
dense, while also handling arbitrary ID values.  (We'd have to make
sure that L1 couldn't trigger unbounded memory consumption in L0,
though.)
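Something along these lines, as an untested sketch using the kernel's
linux/hashtable.h API -- the structure and function names here are
invented for illustration, not the actual KVM nested-guest code, and
I've left out the locking:

/*
 * Untested sketch only -- names are invented, not the real KVM code.
 * Map an arbitrary vcpu_token from L1 to the last pCPU that vcpu ran
 * on, instead of indexing an array by token.  Lookup stays O(1) when
 * the token set is dense, but any u64 token value works.  Locking is
 * omitted for brevity.
 */
#include <linux/hashtable.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Refuse new entries past this, so L1 can't make L0 allocate
 * without bound. */
#define MAX_NESTED_VCPUS	1024

struct nested_vcpu_ent {
	struct hlist_node node;
	u64 vcpu_token;		/* key supplied by L1 in H_ENTER_NESTED */
	int prev_cpu;		/* last pCPU this vcpu ran on */
};

struct nested_guest_sketch {
	DECLARE_HASHTABLE(vcpu_map, 7);		/* 128 buckets */
	unsigned int nr_vcpus;
};

static void nested_guest_sketch_init(struct nested_guest_sketch *gp)
{
	hash_init(gp->vcpu_map);
	gp->nr_vcpus = 0;
}

/* Find the entry for a token, allocating one on first use. */
static struct nested_vcpu_ent *
nested_vcpu_find(struct nested_guest_sketch *gp, u64 token)
{
	struct nested_vcpu_ent *e;

	hash_for_each_possible(gp->vcpu_map, e, node, token)
		if (e->vcpu_token == token)
			return e;

	if (gp->nr_vcpus >= MAX_NESTED_VCPUS)
		return NULL;	/* caller would fail the hcall */

	e = kzalloc(sizeof(*e), GFP_KERNEL);
	if (!e)
		return NULL;
	e->vcpu_token = token;
	e->prev_cpu = -1;	/* never run yet */
	hash_add(gp->vcpu_map, &e->node, token);
	gp->nr_vcpus++;
	return e;
}

The cap on nr_vcpus is the important bit for the memory-consumption
concern: past the limit we just fail the lookup and the hcall, rather
than letting L1 grow the table arbitrarily.

Paul.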