On 03.06.2011, at 01:17, Scott Wood wrote:

> From: Liu Yu <yu.liu@xxxxxxxxxxxxx>
>
> Dynamically assign host PIDs to guest PIDs, splitting each guest PID into
> multiple host (shadow) PIDs based on kernel/user and MSR[IS/DS]. Use
> both PID0 and PID1 so that the shadow PIDs for the right mode can be
> selected, that correspond both to guest TID = zero and guest TID = guest
> PID.
>
> This allows us to significantly reduce the frequency of needing to
> invalidate the entire TLB. When the guest mode or PID changes, we just
> update the host PID0/PID1. And since the allocation of shadow PIDs is
> global, multiple guests can share the TLB without conflict.
>
> Note that KVM does not yet support the guest setting PID1 or PID2 to
> a value other than zero. This will need to be fixed for nested KVM
> to work. Until then, we enforce the requirement for guest PID1/PID2
> to stay zero by failing the emulation if the guest tries to set them
> to something else.
>
> Signed-off-by: Liu Yu <yu.liu@xxxxxxxxxxxxx>
> Signed-off-by: Scott Wood <scottwood@xxxxxxxxxxxxx>
> ---
>  arch/powerpc/include/asm/kvm_e500.h |    8 +-
>  arch/powerpc/include/asm/kvm_host.h |    1 +
>  arch/powerpc/kernel/asm-offsets.c   |    1 +
>  arch/powerpc/kvm/44x_tlb.c          |    4 +-
>  arch/powerpc/kvm/booke.c            |   11 +-
>  arch/powerpc/kvm/booke.h            |    1 +
>  arch/powerpc/kvm/booke_interrupts.S |   11 ++
>  arch/powerpc/kvm/e500_emulate.c     |    4 +
>  arch/powerpc/kvm/e500_tlb.c         |  312 ++++++++++++++++++++++++++++++++---
>  arch/powerpc/kvm/e500_tlb.h         |   13 ++-
>  10 files changed, 334 insertions(+), 32 deletions(-)
>
> [...]
> @@ -149,15 +342,76 @@ void kvmppc_map_magic(struct kvm_vcpu *vcpu)
>  	magic.mas7 = pfn >> (32 - PAGE_SHIFT);
>  
>  	__write_host_tlbe(&magic, MAS0_TLBSEL(1) | MAS0_ESEL(tlbcam_index));
> +	preempt_enable();
>  }
>  
>  void kvmppc_e500_tlb_load(struct kvm_vcpu *vcpu, int cpu)
>  {
> +	struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
> +
> +	/* Shadow PID may be expired on local core */
> +	kvmppc_e500_recalc_shadow_pid(vcpu_e500);
>  }
>  
>  void kvmppc_e500_tlb_put(struct kvm_vcpu *vcpu)
>  {
> -	_tlbil_all();
> +}
> +
> +static void kvmppc_e500_stlbe_invalidate(struct kvmppc_vcpu_e500 *vcpu_e500,
> +					 int tlbsel, int esel)
> +{
> +	struct tlbe *gtlbe = &vcpu_e500->gtlb_arch[tlbsel][esel];
> +	struct vcpu_id_table *idt = vcpu_e500->idt;
> +	unsigned int pr, tid, ts, pid;
> +	u32 val, eaddr;
> +	unsigned long flags;
> +
> +	ts = get_tlb_ts(gtlbe);
> +	tid = get_tlb_tid(gtlbe);
> +
> +	preempt_disable();
> +
> +	/* One guest ID may be mapped to two shadow IDs */
> +	for (pr = 0; pr < 2; pr++) {
> +		/*
> +		 * The shadow PID can have a valid mapping on at most one
> +		 * host CPU. In the common case, it will be valid on this

Not sure I understand this part. Who ensures that a shadow pid is only valid on a single CPU? From what I see, the only guarantee is that there's at most one shadow pid for each AS-PID-PR combination per CPU. But that means the invalidate could very well reach out to a different CPU as well, no?

Also, the shadow PID could be valid on multiple cores even. So if the guest is on CPU A, maps a shadow pid, then migrates to CPU B, uses another shadow PID, we have 2 mappings active, right?
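Just to make sure we're talking about the same thing, here's a rough stand-alone model of how I read the per-CPU shadow ID tables. This is not the actual e500 code, only an untested sketch; the model_local_sid_* helpers and the table sizes are made up:

/*
 * Toy model, not the real code: one shadow PID slot per
 * (AS, guest TID, PR) combination, per host CPU.
 */
#include <stdio.h>

#define NCPUS   2
#define NAS     2       /* MSR[IS/DS] address spaces */
#define NTID    256     /* guest TIDs */
#define NPR     2       /* kernel/user */

static int shadow_pid[NCPUS][NAS][NTID][NPR];
static int next_sid[NCPUS];

/* return a valid (> 0) shadow PID for this combination on this CPU, or 0 */
static int model_local_sid_lookup(int cpu, int as, int tid, int pr)
{
        return shadow_pid[cpu][as][tid][pr];
}

/* allocate a fresh shadow PID on this CPU for the combination */
static int model_local_sid_setup(int cpu, int as, int tid, int pr)
{
        shadow_pid[cpu][as][tid][pr] = ++next_sid[cpu];
        return shadow_pid[cpu][as][tid][pr];
}

int main(void)
{
        /* guest runs with AS=0, TID=5, PR=0 on CPU 0 and faults in a mapping */
        model_local_sid_setup(0, 0, 5, 0);

        /* it migrates to CPU 1 and faults the same mapping in again */
        model_local_sid_setup(1, 0, 5, 0);

        /* both CPUs now hold a live shadow PID for the same guest ID */
        printf("CPU0 sid=%d, CPU1 sid=%d\n",
               model_local_sid_lookup(0, 0, 5, 0),
               model_local_sid_lookup(1, 0, 5, 0));
        return 0;
}

If that model is roughly right, then after the migration both CPUs hold a live shadow PID for the same guest ID, which is why the "at most one host CPU" wording above confuses me.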

> +		 * CPU, in which case (for TLB0) we do a local invalidation
> +		 * of the specific address.
> +		 *
> +		 * If the shadow PID is not valid on the current host CPU, or
> +		 * if we're invalidating a TLB1 entry, we invalidate the
> +		 * entire shadow PID.
> +		 */
> +		if (tlbsel == 1 ||
> +		    (pid = local_sid_lookup(&idt->id[ts][tid][pr])) <= 0) {
> +			kvmppc_e500_id_table_reset_one(vcpu_e500, ts, tid, pr);
> +			continue;
> +		}
> +
> +		/*
> +		 * The guest is invalidating a TLB0 entry which in in a PID

in in?

Alex