Re: [PATCH 3/3] KVM: PPC: e500: Implement TLB1-in-TLB0 mapping

On 01/17/2013 04:50:41 PM, Alexander Graf wrote:
When a host mapping fault happens in a guest TLB1 entry today, we
map the translated guest entry into the host's TLB1.

This isn't particularly clever when the guest is mapped with normal 4k
pages, since those would be much better off in TLB0 instead.

This patch adds the required logic to map 4k TLB1 shadow maps into
the host's TLB0.

Signed-off-by: Alexander Graf <agraf@xxxxxxx>
---
 arch/powerpc/kvm/e500.h          |    1 +
 arch/powerpc/kvm/e500_mmu_host.c | 58 +++++++++++++++++++++++++++++--------
 2 files changed, 46 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kvm/e500.h b/arch/powerpc/kvm/e500.h
index 00f96d8..d32e6a8 100644
--- a/arch/powerpc/kvm/e500.h
+++ b/arch/powerpc/kvm/e500.h
@@ -28,6 +28,7 @@

 #define E500_TLB_VALID 1
 #define E500_TLB_BITMAP 2
+#define E500_TLB_TLB0		(1 << 2)

 struct tlbe_ref {
 	pfn_t pfn;
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index 3bb2154..cbb6cf8 100644
--- a/arch/powerpc/kvm/e500_mmu_host.c
+++ b/arch/powerpc/kvm/e500_mmu_host.c
@@ -198,6 +198,11 @@ void inval_gtlbe_on_host(struct kvmppc_vcpu_e500 *vcpu_e500, int tlbsel,
 		local_irq_restore(flags);

 		return;
+	} else if (tlbsel == 1 &&
+		   vcpu_e500->gtlb_priv[1][esel].ref.flags & E500_TLB_TLB0) {
+		/* This is a slow path, so just invalidate everything */
+		kvmppc_e500_tlbil_all(vcpu_e500);
+		vcpu_e500->gtlb_priv[1][esel].ref.flags &= ~E500_TLB_TLB0;
 	}

What if the guest TLB1 entry is backed by a mix of TLB0 and TLB1 entries on the host? I don't see checks elsewhere that would prevent this situation.

@@ -529,9 +556,14 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 eaddr, gpa_t gpaddr,
 	case 1: {
 		gfn_t gfn = gpaddr >> PAGE_SHIFT;

-		stlbsel = 1;
 		sesel = kvmppc_e500_tlb1_map(vcpu_e500, eaddr, gfn,
 					     gtlbe, &stlbe, esel);
+		if (sesel < 0) {
+			/* TLB0 mapping */
+			sesel = 0;
+			stlbsel = 0;
+		} else
+			stlbsel = 1;
 		break;
 	}

Maybe push the call to write_tlbe() into the tlb0/1_map functions, getting rid of the need to pass sesel/stlbsel/stlbe back?

-Scott
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html