Re: [PATCH Part2 v6 09/49] x86/fault: Add support to handle the RMP fault for user address

On Tue, Aug 09, 2022 at 06:55:43PM +0200, Borislav Petkov wrote:
> On Mon, Jun 20, 2022 at 11:03:43PM +0000, Ashish Kalra wrote:
> > +	pfn = pte_pfn(*pte);
> > +
> > +	/* If its large page then calculte the fault pfn */
> > +	if (level > PG_LEVEL_4K) {
> > +		unsigned long mask;
> > +
> > +		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
> > +		pfn |= (address >> PAGE_SHIFT) & mask;
> 
> Oh boy, this is unnecessarily complicated. Isn't this
> 
> 	pfn |= pud_index(address);
> 
> or
> 	pfn |= pmd_index(address);

I played with this a bit and ended up with

        pfn = pte_pfn(*pte) | PFN_DOWN(address & page_level_mask(level - 1));

Unless I got something terribly wrong, this should do the same as the
existing calculation (see the attached patch).
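
In case it helps, here is a quick userspace sketch for comparing the two
expressions on a sample address. The pages_per_hpage()/page_level_mask()/
PFN_DOWN() helpers below are local stand-ins that mimic the kernel ones
(assuming x86-64's 4K pages and 9 bits per table level), and the sample
address and pte_pfn() value are made up, so take it as a sanity-check aid
rather than anything authoritative:

/* Compare the old and new PFN calculations for a fault address
 * that is mapped by a 2M (PG_LEVEL_2M) entry.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

enum pg_level { PG_LEVEL_NONE, PG_LEVEL_4K, PG_LEVEL_2M, PG_LEVEL_1G };

/* Stand-ins for the kernel helpers: 4K -> 12, 2M -> 21, 1G -> 30 */
static unsigned long page_level_shift(int level)
{
	return PAGE_SHIFT + (level - PG_LEVEL_4K) * 9;
}

static unsigned long page_level_size(int level)
{
	return 1UL << page_level_shift(level);
}

static unsigned long page_level_mask(int level)
{
	return ~(page_level_size(level) - 1);
}

static unsigned long pages_per_hpage(int level)
{
	return page_level_size(level) / PAGE_SIZE;
}

#define PFN_DOWN(x)	((x) >> PAGE_SHIFT)

int main(void)
{
	unsigned long address = 0x7f1234567000UL;	/* arbitrary sample VA */
	unsigned long base_pfn = 0x140000UL;		/* made-up 2M-aligned pte_pfn() */
	int level = PG_LEVEL_2M;
	unsigned long mask, old_pfn, new_pfn;

	/* Calculation from the original patch */
	mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
	old_pfn = base_pfn | ((address >> PAGE_SHIFT) & mask);

	/* Calculation from the attached patch */
	new_pfn = base_pfn | PFN_DOWN(address & page_level_mask(level - 1));

	printf("old: %#lx\nnew: %#lx\n", old_pfn, new_pfn);
	return 0;
}

Build it with "cc -o pfncheck pfncheck.c" and compare the two printed
values for whatever address/level combinations you care about.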

BR, Jarkko

From c92522f6199055cd609ddd785dc9d8e85153e3b4 Mon Sep 17 00:00:00 2001
From: Jarkko Sakkinen <jarkko@xxxxxxxxxxx>
Date: Tue, 6 Sep 2022 09:51:59 +0300
Subject: [PATCH] x86/fault: Simplify PFN calculation in
 handle_user_rmp_fault()

Use functions in asm/pgtable.h to calculate the PFN for the address inside
the PTE's page directory. PG_LEVEL_4K PTEs obviously do not have a page
directory, but that is not an issue because:

	page_level_mask(PG_LEVEL_4K - 1) ==
	page_level_mask(PG_LEVEL_NONE) ==
	0

Signed-off-by: Jarkko Sakkinen <jarkko@xxxxxxxxxxx>
---
 arch/x86/mm/fault.c | 16 ++--------------
 1 file changed, 2 insertions(+), 14 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 6404ef73eb56..28b3f80611a3 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1219,11 +1219,6 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 }
 NOKPROBE_SYMBOL(do_kern_addr_fault);
 
-static inline size_t pages_per_hpage(int level)
-{
-	return page_level_size(level) / PAGE_SIZE;
-}
-
 /*
  * Return 1 if the caller need to retry, 0 if it the address need to be split
  * in order to resolve the fault.
@@ -1248,15 +1243,8 @@ static int handle_user_rmp_page_fault(struct pt_regs *regs, unsigned long error_
 	if (!pte || !pte_present(*pte))
 		return 1;
 
-	pfn = pte_pfn(*pte);
-
-	/* If its large page then calculte the fault pfn */
-	if (level > PG_LEVEL_4K) {
-		unsigned long mask;
-
-		mask = pages_per_hpage(level) - pages_per_hpage(level - 1);
-		pfn |= (address >> PAGE_SHIFT) & mask;
-	}
+	/*  Calculate PFN inside the page directory: */
+	pfn = pte_pfn(*pte) | PFN_DOWN(address & page_level_mask(level - 1));
 
 	/*
 	 * If its a guest private page, then the fault cannot be resolved.
-- 
2.37.2

