Patch "x86/mm/ident_map: Use gbpages only where full GB page should be mapped." has been added to the 6.10-stable tree

This is a note to let you know that I've just added the patch titled

    x86/mm/ident_map: Use gbpages only where full GB page should be mapped.

to the 6.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-mm-ident_map-use-gbpages-only-where-full-gb-page.patch
and it can be found in the queue-6.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 82b519ecb2fd30fb62d54b2fd0a31bd750389840
Author: Steve Wahl <steve.wahl@xxxxxxx>
Date:   Wed Jul 17 16:31:21 2024 -0500

    x86/mm/ident_map: Use gbpages only where full GB page should be mapped.
    
    [ Upstream commit cc31744a294584a36bf764a0ffa3255a8e69f036 ]
    
    When ident_pud_init() uses only GB pages to create identity maps, large
    ranges of addresses not actually requested can be included in the resulting
    table; a 4K request will map a full GB.  This can include a lot of extra
    address space past that requested, including areas marked reserved by the
    BIOS.  That allows processor speculation into reserved regions, which on UV
    systems can cause system halts.
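    
    As a concrete illustration, here is a minimal user-space sketch of the
    old rounding behaviour (PUD_SHIFT is the x86-64 value; the sample
    request is made up for the example):
    
	#include <stdio.h>
	
	#define PUD_SHIFT	30			/* one PUD entry covers 1 GiB */
	#define PUD_SIZE	(1UL << PUD_SHIFT)
	#define PUD_MASK	(~(PUD_SIZE - 1))
	
	int main(void)
	{
		unsigned long addr = 0x40001000UL;	/* a 4K request...       */
		unsigned long end  = addr + 0x1000UL;	/* ...one page past addr */
	
		/* The old code rounded addr down and installed a full GB page: */
		unsigned long mapped = addr & PUD_MASK;
	
		printf("requested [%#lx, %#lx), mapped [%#lx, %#lx)\n",
		       addr, end, mapped, mapped + PUD_SIZE);
		return 0;
	}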
    
    Only use GB pages when map creation requests include the full GB page of
    space.  Fall back to using smaller 2M pages when only portions of a GB page
    are included in the request.
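    
    The new test, pulled out into a stand-alone helper for illustration
    (PUD_MASK as in the sketch above; the pud_present() check from the
    hunk below is left out here):
    
	#include <stdbool.h>
	
	/* A GB page is allowed only when the request covers the whole
	 * 1 GiB slot, i.e. both ends sit on a GB boundary. */
	static bool can_use_gbpage(unsigned long addr, unsigned long next,
				   bool direct_gbpages)
	{
		bool use_gbpage = direct_gbpages;
	
		use_gbpage &= ((addr & ~PUD_MASK) == 0);	/* aligned start */
		use_gbpage &= ((next & ~PUD_MASK) == 0);	/* aligned end   */
	
		return use_gbpage;
	}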
    
    No attempt is made to coalesce mapping requests. If a request requires a
    map entry at the 2M (pmd) level, subsequent mapping requests within the
    same 1G region will also be at the pmd level, even if adjacent or
    overlapping such requests could have been combined to map a full GB page.
    Existing usage starts with larger regions and then adds smaller regions, so
    this should not have any great consequence.
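    
    Continuing the sketch above (reusing can_use_gbpage() and the stdio.h
    include from the earlier snippets), two back-to-back requests that
    together cover exactly one GB page still end up as 2M mappings; the
    addresses are illustrative:
    
	int main(void)
	{
		/* [0, 512M): end is mid-page -> falls back to 2M pages */
		printf("%d\n", can_use_gbpage(0x0UL, 0x20000000UL, true));
	
		/* [512M, 1G): start is mid-page -> 2M pages again; even an
		 * aligned retry would be refused by the !pud_present() test,
		 * because the first request already installed a pmd table
		 * in this PUD slot. */
		printf("%d\n", can_use_gbpage(0x20000000UL, 0x40000000UL, true));
		return 0;
	}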
    
    Signed-off-by: Steve Wahl <steve.wahl@xxxxxxx>
    Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
    Tested-by: Pavin Joseph <me@xxxxxxxxxxxxxxx>
    Tested-by: Sarah Brofeldt <srhb@xxxxxx>
    Tested-by: Eric Hagberg <ehagberg@xxxxxxxxx>
    Link: https://lore.kernel.org/all/20240717213121.3064030-3-steve.wahl@xxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index 968d7005f4a72..a204a332c71fc 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -26,18 +26,31 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 	for (; addr < end; addr = next) {
 		pud_t *pud = pud_page + pud_index(addr);
 		pmd_t *pmd;
+		bool use_gbpage;
 
 		next = (addr & PUD_MASK) + PUD_SIZE;
 		if (next > end)
 			next = end;
 
-		if (info->direct_gbpages) {
-			pud_t pudval;
+		/* if this is already a gbpage, this portion is already mapped */
+		if (pud_leaf(*pud))
+			continue;
+
+		/* Is using a gbpage allowed? */
+		use_gbpage = info->direct_gbpages;
 
-			if (pud_present(*pud))
-				continue;
+		/* Don't use gbpage if it maps more than the requested region. */
+		/* at the beginning: */
+		use_gbpage &= ((addr & ~PUD_MASK) == 0);
+		/* ... or at the end: */
+		use_gbpage &= ((next & ~PUD_MASK) == 0);
+
+		/* Never overwrite existing mappings */
+		use_gbpage &= !pud_present(*pud);
+
+		if (use_gbpage) {
+			pud_t pudval;
 
-			addr &= PUD_MASK;
 			pudval = __pud((addr - info->offset) | info->page_flag);
 			set_pud(pud, pudval);
 			continue;
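
For context, a rough sketch of how a caller reaches ident_pud_init() via
kernel_ident_mapping_init(). The field and helper names loosely follow the
kexec code of this kernel series, and alloc_pgt_page, image, pgd, mstart
and mend are placeholders; treat the details as assumptions, not as part
of the patch:

	struct x86_mapping_info info = {
		.alloc_pgt_page	= alloc_pgt_page,	/* caller-supplied page-table allocator */
		.context	= image,		/* cookie handed back to the allocator  */
		.page_flag	= __PAGE_KERNEL_LARGE_EXEC,
	};

	info.direct_gbpages = boot_cpu_has(X86_FEATURE_GBPAGES);

	/* Identity-map one region; with this patch, ident_pud_init() picks
	 * GB vs 2M pages per PUD slot instead of for the whole request. */
	kernel_ident_mapping_init(&info, pgd, mstart, mend);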



