On 18.01.2013, at 01:11, Scott Wood wrote:

> On 01/17/2013 04:50:39 PM, Alexander Graf wrote:
>> @@ -1024,9 +1001,11 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
>>  {
>>  	struct kvmppc_vcpu_e500 *vcpu_e500 = to_e500(vcpu);
>>  	struct tlbe_priv *priv;
>> -	struct kvm_book3e_206_tlb_entry *gtlbe, stlbe;
>> +	struct kvm_book3e_206_tlb_entry *gtlbe, stlbe = {};
>
> Is there a code path in which stlbe gets used but not fully filled in
> without this?

I'm hoping not, but when I wrote this patch gcc suddenly jumped at me, claiming that the whole struct can get used uninitialized:

  arch/powerpc/kvm/e500_mmu_host.c: In function ‘kvmppc_mmu_map’:
  arch/powerpc/kvm/e500_mmu_host.c:533: error: ‘stlbe.mas1’ may be used uninitialized in this function
  arch/powerpc/kvm/e500_mmu_host.c:533: error: ‘stlbe.mas2’ may be used uninitialized in this function
  arch/powerpc/kvm/e500_mmu_host.c:533: error: ‘stlbe.mas7_3’ may be used uninitialized in this function
  arch/powerpc/kvm/e500_mmu_host.c:533: error: ‘stlbe.mas8’ may be used uninitialized in this function

If you have any idea where this could come from, please let me know :).

>> 	int tlbsel = tlbsel_of(index);
>> 	int esel = esel_of(index);
>> +	/* Needed for initial map, where we can't use the cached value */
>> +	int force_map = index & KVM_E500_INDEX_FORCE_MAP;
>> 	int stlbsel, sesel;
>>
>> 	gtlbe = get_entry(vcpu_e500, tlbsel, esel);
>>
>> @@ -1038,7 +1017,7 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
>> 	priv = &vcpu_e500->gtlb_priv[tlbsel][esel];
>>
>> 	/* Only triggers after clear_tlb_refs */
>> -	if (unlikely(!(priv->ref.flags & E500_TLB_VALID)))
>> +	if (force_map || unlikely(!(priv->ref.flags & E500_TLB_VALID)))
>> 		kvmppc_e500_tlb0_map(vcpu_e500, esel, &stlbe);
>> 	else
>> 		kvmppc_e500_setup_stlbe(vcpu, gtlbe, BOOK3E_PAGESZ_4K,
>
> It seems a bit odd to overload index rather than adding a flags
> parameter...

Yeah, I mostly wanted to refrain from touching the 440 code.
But if you prefer that, I can certainly do so :).

> It also seems like it would be cleaner to just invalidate the old entry
> in tlbwe, and then this function doesn't need to change at all.  I am a
> bit confused by how invalidation is currently operating -- why is
> E500_TLB_VALID not cleared on invalidations (except for MMU API stuff
> and MMU notifiers)?

Consider me as confused as you are.


Alex
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html