On Wed, Mar 24, 2021 at 10:04 AM Brijesh Singh <brijesh.singh@xxxxxxx> wrote:
>
> If hardware detects an RMP violation, it will raise a page-fault exception
> with the RMP bit set. To help the debug, dump the RMP entry of the faulting
> address.
>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxxxx>
> Cc: Joerg Roedel <jroedel@xxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: Tony Luck <tony.luck@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: "Peter Zijlstra (Intel)" <peterz@xxxxxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
> Cc: David Rientjes <rientjes@xxxxxxxxxx>
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: x86@xxxxxxxxxx
> Cc: kvm@xxxxxxxxxxxxxxx
> Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
> ---
>  arch/x86/mm/fault.c | 75 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 75 insertions(+)
>
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
> index f39b551f89a6..7605e06a6dd9 100644
> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -31,6 +31,7 @@
>  #include <asm/pgtable_areas.h>		/* VMALLOC_START, ... */
>  #include <asm/kvm_para.h>		/* kvm_handle_async_pf */
>  #include <asm/vdso.h>			/* fixup_vdso_exception() */
> +#include <asm/sev-snp.h>		/* lookup_rmpentry ... */
>
>  #define CREATE_TRACE_POINTS
>  #include <asm/trace/exceptions.h>
> @@ -147,6 +148,76 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr)
>  DEFINE_SPINLOCK(pgd_lock);
>  LIST_HEAD(pgd_list);
>
> +static void dump_rmpentry(struct page *page, rmpentry_t *e)
> +{
> +	unsigned long paddr = page_to_pfn(page) << PAGE_SHIFT;
> +
> +	pr_alert("RMPEntry paddr 0x%lx [assigned=%d immutable=%d pagesize=%d gpa=0x%lx asid=%d "
> +		 "vmsa=%d validated=%d]\n", paddr, rmpentry_assigned(e), rmpentry_immutable(e),
> +		 rmpentry_pagesize(e), rmpentry_gpa(e), rmpentry_asid(e), rmpentry_vmsa(e),
> +		 rmpentry_validated(e));
> +	pr_alert("RMPEntry paddr 0x%lx %016llx %016llx\n", paddr, e->high, e->low);
> +}
> +
> +static void show_rmpentry(unsigned long address)
> +{
> +	struct page *page = virt_to_page(address);

This is an error path, and I don't think you have any particular guarantee
that virt_to_page(address) is valid.  Please add appropriate validation or
use one of the slow lookup helpers.
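E.g. something along these lines might work (untested sketch; the
lookup_rmpentry() signature is assumed from the <asm/sev-snp.h> comment in
the hunk above, so adjust to whatever the real helper looks like):

```c
static void show_rmpentry(unsigned long address)
{
	struct page *page;
	rmpentry_t *e;
	int level;

	/*
	 * The faulting address is not guaranteed to be in the direct map,
	 * so reject anything virt_to_page() can't legitimately translate.
	 */
	if (!virt_addr_valid(address))
		return;

	page = virt_to_page(address);
	e = lookup_rmpentry(page, &level);
	if (!e)
		return;

	dump_rmpentry(page, e);
}
```

virt_addr_valid() also covers pfn_valid(), so this filters out vmalloc/module
addresses as well as holes in the physical map.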