On Fri, Sep 21, 2018 at 08:01:58PM +1000, Paul Mackerras wrote:
> From: Suraj Jitindar Singh <sjitindarsingh@xxxxxxxxx>
>
> This is only done at level 0, since only level 0 knows which physical
> CPU a vcpu is running on. This does for nested guests what L0 already
> did for its own guests, which is to flush the TLB on a pCPU when it
> goes to run a vCPU there, and there is another vCPU in the same VM
> which previously ran on this pCPU and has now started to run on another
> pCPU. This is to handle the situation where the other vCPU touched
> a mapping, moved to another pCPU and did a tlbiel (local-only tlbie)
> on that new pCPU and thus left behind a stale TLB entry on this pCPU.
>
> Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@xxxxxxxxx>
> Signed-off-by: Paul Mackerras <paulus@xxxxxxxxxx>

Reviewed-by: David Gibson <david@xxxxxxxxxxxxxxxxxxxxx>

[snip]

> static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
> {
> +       struct kvm_nested_guest *nested = vcpu->arch.nested;
> +       cpumask_t *cpu_in_guest;
>         int i;
>
>         cpu = cpu_first_thread_sibling(cpu);
> -       cpumask_set_cpu(cpu, &kvm->arch.need_tlb_flush);
> +       if (nested) {
> +               cpumask_set_cpu(cpu, &nested->need_tlb_flush);
> +               cpu_in_guest = &nested->cpu_in_guest;
> +       } else {
> +               cpumask_set_cpu(cpu, &kvm->arch.need_tlb_flush);
> +               cpu_in_guest = &kvm->arch.cpu_in_guest;
> +       }

I don't think it's important for now, but a possible followup might be
to make a "lpid_state" or something structure for the information
that's common between a direct guest and a nested guest. That might
collapse a bunch of these if (nested) tests.

--
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you. NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson
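
To illustrate the suggested followup, one possible shape for such a shared
structure is sketched below. The name kvm_lpid_state and the lpid_state
members on kvm->arch and struct kvm_nested_guest are purely hypothetical,
not existing kernel code; only the two cpumasks shown are taken from the
patch above.

/*
 * Hypothetical sketch only: per-LPID state common to a direct guest
 * (kvm->arch) and a nested guest (struct kvm_nested_guest), so callers
 * can work on one pointer instead of testing vcpu->arch.nested.
 */
struct kvm_lpid_state {
        cpumask_t need_tlb_flush;       /* pCPUs that may hold stale TLB entries */
        cpumask_t cpu_in_guest;         /* pCPUs currently running vCPUs of this LPID */
};

static void radix_flush_cpu(struct kvm *kvm, int cpu, struct kvm_vcpu *vcpu)
{
        /* One pointer covers both the direct and the nested case. */
        struct kvm_lpid_state *ls = vcpu->arch.nested ?
                        &vcpu->arch.nested->lpid_state : &kvm->arch.lpid_state;

        cpu = cpu_first_thread_sibling(cpu);
        cpumask_set_cpu(cpu, &ls->need_tlb_flush);
        /* ... the rest of the function would use ls->cpu_in_guest ... */
}

With that, the if (nested) branch in radix_flush_cpu() (and similar tests
elsewhere) would collapse into the single pointer selection shown above.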