Hi Nikunj,
On 18.01.2022 12:06, Nikunj A Dadhania wrote:
From: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
Pin the memory for the data being passed to launch_update_data()
because it gets encrypted before the guest is first run and must
not be moved, which would corrupt it.
Signed-off-by: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
[ * Changed hva_to_gva() to take an extra argument and return gpa_t.
* Updated sev_pin_memory_in_mmu() error handling.
* As pinning/unpinning pages is handled within the MMU, removed
{get,put}_user(). ]
Signed-off-by: Nikunj A Dadhania <nikunj@xxxxxxx>
---
arch/x86/kvm/svm/sev.c | 122 ++++++++++++++++++++++++++++++++++++++++-
1 file changed, 119 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 14aeccfc500b..1ae714e83a3c 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -22,6 +22,7 @@
#include <asm/trapnr.h>
#include <asm/fpu/xcr.h>
+#include "mmu.h"
#include "x86.h"
#include "svm.h"
#include "svm_ops.h"
@@ -490,6 +491,110 @@ static unsigned long get_num_contig_pages(unsigned long idx,
return pages;
}
+#define SEV_PFERR_RO (PFERR_USER_MASK)
+#define SEV_PFERR_RW (PFERR_WRITE_MASK | PFERR_USER_MASK)
+
+static struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm,
+ unsigned long hva)
+{
+ struct kvm_memslots *slots = kvm_memslots(kvm);
+ struct kvm_memory_slot *memslot;
+ int bkt;
+
+ kvm_for_each_memslot(memslot, bkt, slots) {
+ if (hva >= memslot->userspace_addr &&
+ hva < memslot->userspace_addr +
+ (memslot->npages << PAGE_SHIFT))
+ return memslot;
+ }
+
+ return NULL;
+}
We have kvm_for_each_memslot_in_hva_range() now, please don't do a linear
search through memslots.
You might need to move the aforementioned macro from kvm_main.c to some
header file, though.
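For illustration, the lookup could then become an interval tree walk,
something like this (an untested sketch, assuming the macro gets moved to
a header such as include/linux/kvm_host.h):

static struct kvm_memory_slot *hva_to_memslot(struct kvm *kvm,
                                              unsigned long hva)
{
        struct kvm_memslots *slots = kvm_memslots(kvm);
        struct interval_tree_node *node;

        /* The interval tree takes an inclusive [start, last] range. */
        kvm_for_each_memslot_in_hva_range(node, slots, hva, hva) {
                /* Note: this still returns only the first intersecting slot. */
                return container_of(node, struct kvm_memory_slot,
                                    hva_node[slots->node_idx]);
        }

        return NULL;
}

(Note it still returns just the first intersecting slot, which brings me
to the next point.)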
Besides performance considerations, I can't see the code here taking into
account the fact that a hva can map to multiple memslots (they can overlap
in the host address space).
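Just to illustrate (again an untested sketch): a hva -> gpa translation
cannot simply stop at the first hit, since every intersecting memslot maps
that hva to a different gfn, so something like sev_pin_memory_in_mmu()
would need to either visit all of them or explicitly decide which mapping
to use:

        struct kvm_memslots *slots = kvm_memslots(kvm);
        struct interval_tree_node *node;

        kvm_for_each_memslot_in_hva_range(node, slots, hva, hva) {
                struct kvm_memory_slot *memslot =
                        container_of(node, struct kvm_memory_slot,
                                     hva_node[slots->node_idx]);
                /* Each aliasing slot yields a different gfn (and so gpa). */
                gfn_t gfn = hva_to_gfn_memslot(hva, memslot);

                /* pin / encrypt via this particular gfn here */
        }

hva_to_gfn_memslot() is the existing helper from include/linux/kvm_host.h.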
Thanks,
Maciej