On Wed, 2012-03-14 at 18:52 +1100, Benjamin Herrenschmidt wrote:
> When the kernel calls into RTAS, it switches to 32-bit mode. The
> magic page is no longer accessible in that case, causing the
> patched instructions in the RTAS call wrapper to crash.
>
> This fixes it by making a 32-bit mapping of the magic page
> available in that case. This mapping is flushed whenever we switch
> the kernel back to 64-bit mode.

I forgot to give credit to Alex for the original patch, which I
tweaked a little bit (among other things it was missing the bit in
kvmppc_gfn_to_pfn).

Cheers,
Ben.

> Signed-off-by: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> ---
>
> Avi, please consider merging ASAP as this is a fairly annoying
> bug and the fix is reasonably obvious.
>
>  arch/powerpc/kvm/book3s.c    |    3 +++
>  arch/powerpc/kvm/book3s_pr.c |   17 +++++++++++++++++
>  2 files changed, 20 insertions(+), 0 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
> index e41ac6f..34487d4 100644
> --- a/arch/powerpc/kvm/book3s.c
> +++ b/arch/powerpc/kvm/book3s.c
> @@ -289,6 +289,9 @@ pfn_t kvmppc_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
>  	ulong mp_pa = vcpu->arch.magic_page_pa;
>
> +	if (!(vcpu->arch.shared->msr & MSR_SF))
> +		mp_pa = (uint32_t)mp_pa;
> +
>  	/* Magic page override */
>  	if (unlikely(mp_pa) &&
>  	    unlikely(((gfn << PAGE_SHIFT) & KVM_PAM) ==
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index d6851a1..23919d4 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -137,6 +137,20 @@ void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 msr)
>  		}
>  	}
>
> +	/*
> +	 * When switching from 32 to 64-bit, we may have a stale 32-bit
> +	 * magic page around, and we need to flush it. Typically the
> +	 * 32-bit magic page will be instantiated when calling into
> +	 * RTAS. Note: We assume that such a transition only happens
> +	 * while in kernel mode, i.e. we never transition from user
> +	 * 32-bit to kernel 64-bit with a 32-bit magic page around.
> +	 */
> +	if (!(old_msr & MSR_PR) && !(old_msr & MSR_SF) && (msr & MSR_SF)) {
> +		/* going from RTAS to normal kernel code */
> +		kvmppc_mmu_pte_flush(vcpu, (uint32_t)vcpu->arch.magic_page_pa,
> +				     ~0xFFFUL);
> +	}
> +
>  	/* Preload FPU if it's enabled */
>  	if (vcpu->arch.shared->msr & MSR_FP)
>  		kvmppc_handle_ext(vcpu, BOOK3S_INTERRUPT_FP_UNAVAIL, MSR_FP);
> @@ -242,6 +256,9 @@ static int kvmppc_visible_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
>  {
>  	ulong mp_pa = vcpu->arch.magic_page_pa;
>
> +	if (!(vcpu->arch.shared->msr & MSR_SF))
> +		mp_pa = (uint32_t)mp_pa;
> +
>  	if (unlikely(mp_pa) &&
>  	    unlikely((mp_pa & KVM_PAM) >> PAGE_SHIFT == gfn)) {
>  		return 1;
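
As an aside for readers following the logic outside the kernel tree: the core of
the fix is that, whenever the guest's MSR_SF bit is clear (32-bit mode), the magic
page's physical address is compared using only its low 32 bits, so a page
registered at a 64-bit address is also reachable at its truncated 32-bit alias.
The standalone sketch below illustrates that check only; the constant values and
the gfn_is_magic_page() helper are assumptions made up for illustration, not the
kernel's code.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel definitions (values assumed). */
#define MSR_SF		(1ULL << 63)		/* guest is in 64-bit mode */
#define PAGE_SHIFT	12
#define KVM_PAM		0x0fffffffffffffffULL	/* physical address mask */

/*
 * Returns 1 if guest frame number 'gfn' hits the magic page. When MSR_SF
 * is clear, the magic page address is truncated to its low 32 bits first,
 * mirroring the comparison in kvmppc_gfn_to_pfn() plus the truncation the
 * patch adds.
 */
static int gfn_is_magic_page(uint64_t msr, uint64_t magic_page_pa, uint64_t gfn)
{
	uint64_t mp_pa = magic_page_pa;

	if (!(msr & MSR_SF))
		mp_pa = (uint32_t)mp_pa;	/* 32-bit guest: low 32 bits only */

	return mp_pa &&
	       (((gfn << PAGE_SHIFT) & KVM_PAM) == (mp_pa & KVM_PAM));
}

int main(void)
{
	/* Example: magic page registered at the top of the 64-bit address space. */
	uint64_t magic_pa = 0xfffffffffffff000ULL;
	/* The 32-bit RTAS code would access it at the truncated address. */
	uint64_t gfn_32bit_alias = 0xfffff000ULL >> PAGE_SHIFT;

	printf("32-bit mode: %d\n",
	       gfn_is_magic_page(0, magic_pa, gfn_32bit_alias));      /* prints 1 */
	printf("64-bit mode: %d\n",
	       gfn_is_magic_page(MSR_SF, magic_pa, gfn_32bit_alias)); /* prints 0 */
	return 0;
}

When the guest switches back to 64-bit mode (MSR_SF set again while in kernel
mode), the patch flushes any PTEs created for that 32-bit alias via
kvmppc_mmu_pte_flush(), so the stale mapping cannot linger.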