Re: [PATCH 3/3] KVM: PPC: Book3S: Make kvmppc_ld return a more accurate error indication

On 19.07.14 09:59, Paul Mackerras wrote:
At present, kvmppc_ld calls kvmppc_xlate, and if kvmppc_xlate returns
any error indication, it returns -ENOENT, which is taken to mean an
HPTE not found error.  However, the error could have been a segment
fault (no SLB entry found) or a permission error.  Similarly,
kvmppc_pte_to_hva currently does permission checking, but any error
from it is taken by kvmppc_ld to mean that the access is an emulated
MMIO access.  Also, kvmppc_ld does no execute permission checking.

This fixes these problems by (a) returning any error from kvmppc_xlate
directly, (b) moving the permission check from kvmppc_pte_to_hva
into kvmppc_ld, and (c) adding an execute permission check to kvmppc_ld.

This is similar to what was done for kvmppc_st() by commit 82ff911317c3
("KVM: PPC: Deflect page write faults properly in kvmppc_st").

Signed-off-by: Paul Mackerras <paulus@xxxxxxxxx>
---
  arch/powerpc/kvm/book3s.c | 25 ++++++++++++-------------
  1 file changed, 12 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 31facfc..087f8f9 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -413,17 +413,10 @@ static hva_t kvmppc_bad_hva(void)
 	return PAGE_OFFSET;
 }
 
-static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte,
-			       bool read)
+static hva_t kvmppc_pte_to_hva(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte)
 {
 	hva_t hpage;
 
-	if (read && !pte->may_read)
-		goto err;
-
-	if (!read && !pte->may_write)
-		goto err;
-
 	hpage = gfn_to_hva(vcpu->kvm, pte->raddr >> PAGE_SHIFT);
 	if (kvm_is_error_hva(hpage))
 		goto err;
@@ -462,15 +455,23 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
 {
 	struct kvmppc_pte pte;
 	hva_t hva = *eaddr;
+	int rc;
 
 	vcpu->stat.ld++;
 
-	if (kvmppc_xlate(vcpu, *eaddr, data, false, &pte))
-		goto nopte;
+	rc = kvmppc_xlate(vcpu, *eaddr, data, false, &pte);
+	if (rc)
+		return rc;
 
 	*eaddr = pte.raddr;
 
-	hva = kvmppc_pte_to_hva(vcpu, &pte, true);
+	if (!pte.may_read)
+		return -EPERM;
+
+	if (!data && !pte.may_execute)
+		return -ENOEXEC;

We should probably do a full audit of all that code and decide who is responsible for returning which errors where. IIRC our MMU frontends already check pte.may* and return the corresponding error codes.

However, double-checking doesn't hurt for now, so I've applied this patch regardless.
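For reference, a rough caller-side sketch of how the now-distinct return values could be told apart. The wrapper function and the comments about what a caller might do are made up for illustration; only kvmppc_ld() and the error codes come from the patch.

/*
 * Illustrative only -- not from the patch.  With kvmppc_ld() returning
 * distinct codes, an instruction-fetch path could react to each failure
 * mode instead of treating every failure as "HPTE not found".
 */
static int fetch_guest_insn_sketch(struct kvm_vcpu *vcpu, ulong eaddr, u32 *inst)
{
	int rc = kvmppc_ld(vcpu, &eaddr, sizeof(*inst), inst, false);

	switch (rc) {
	case EMULATE_DONE:
		/* translation and copy succeeded; *inst is valid */
		break;
	case -ENOENT:
		/* no HPTE found: caller could reflect a storage fault to the guest */
		break;
	case -EPERM:
		/* mapping exists but does not allow reads */
		break;
	case -ENOEXEC:
		/* mapping exists but is marked no-execute */
		break;
	default:
		/* e.g. EMULATE_DO_MMIO, or whatever kvmppc_xlate returned */
		break;
	}
	return rc;
}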


Alex
