The pkram KASLR code can incur multiple page faults when it walks its
preserved ranges list via mem_avoid_overlap(). These faults can easily
exhaust the small number of pages available for allocating page table
pages. This patch hacks things so that mappings are 1GB, which results
in far fewer page table pages being needed. As is, this breaks AMD
SEV-ES, which expects the mappings to be 2M. This could possibly be
fixed by updating the split code to split 1GB pages, if there aren't
any other issues with using 1GB mappings.

Signed-off-by: Anthony Yznaga <anthony.yznaga@xxxxxxxxxx>
---
 arch/x86/boot/compressed/ident_map_64.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/ident_map_64.c b/arch/x86/boot/compressed/ident_map_64.c
index f7213d0943b8..6ff02da4cc1a 100644
--- a/arch/x86/boot/compressed/ident_map_64.c
+++ b/arch/x86/boot/compressed/ident_map_64.c
@@ -95,8 +95,8 @@ static void add_identity_map(unsigned long start, unsigned long end)
 	int ret;
 
 	/* Align boundary to 2M. */
-	start = round_down(start, PMD_SIZE);
-	end = round_up(end, PMD_SIZE);
+	start = round_down(start, PUD_SIZE);
+	end = round_up(end, PUD_SIZE);
 	if (start >= end)
 		return;
 
@@ -119,6 +119,7 @@ void initialize_identity_maps(void *rmode)
 	mapping_info.context = &pgt_data;
 	mapping_info.page_flag = __PAGE_KERNEL_LARGE_EXEC | sme_me_mask;
 	mapping_info.kernpg_flag = _KERNPG_TABLE;
+	mapping_info.direct_gbpages = true;
 
 	/*
 	 * It should be impossible for this not to already be true,
@@ -329,8 +330,8 @@ void do_boot_page_fault(struct pt_regs *regs, unsigned long error_code)
 
 	ghcb_fault = sev_es_check_ghcb_fault(address);
 
-	address &= PMD_MASK;
-	end = address + PMD_SIZE;
+	address &= PUD_MASK;
+	end = address + PUD_SIZE;
 
 	/*
 	 * Check for unexpected error codes. Unexpected are:
-- 
1.8.3.1
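
For context on the direct_gbpages flag set above: kernel_ident_mapping_init()
already knows how to install 1GB mappings at the PUD level when
info->direct_gbpages is set, so no new mapping code is needed in the
decompressor. The relevant logic looks roughly like the sketch below,
paraphrased from ident_pud_init() in arch/x86/mm/ident_map.c (simplified,
not verbatim upstream code; ident_pmd_init() is the 2M-level counterpart):

	/*
	 * Sketch: PUD-level identity-map walk honoring direct_gbpages,
	 * paraphrased from ident_pud_init() in arch/x86/mm/ident_map.c.
	 */
	static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
				  unsigned long addr, unsigned long end)
	{
		unsigned long next;

		for (; addr < end; addr = next) {
			pud_t *pud = pud_page + pud_index(addr);
			pmd_t *pmd;

			next = (addr & PUD_MASK) + PUD_SIZE;
			if (next > end)
				next = end;

			if (info->direct_gbpages) {
				pud_t pudval;

				/* One 1GB entry covers the whole range. */
				if (pud_present(*pud))
					continue;

				addr &= PUD_MASK;
				pudval = __pud((addr - info->offset) |
					       info->page_flag);
				set_pud(pud, pudval);
				continue;
			}

			/*
			 * Otherwise allocate (or reuse) a PMD page and fill
			 * it with 2M entries -- one page table page per 1GB
			 * of address space mapped.
			 */
			if (pud_present(*pud)) {
				pmd = pmd_offset(pud, 0);
				ident_pmd_init(info, pmd, addr, next);
				continue;
			}
			pmd = (pmd_t *)info->alloc_pgt_page(info->context);
			if (!pmd)
				return -ENOMEM;
			ident_pmd_init(info, pmd, addr, next);
			set_pud(pud, __pud(__pa(pmd) | info->kernpg_flag));
		}

		return 0;
	}

With direct_gbpages a fault anywhere in a 1GB range is satisfied by a single
PUD entry and no alloc_pgt_page() call, which is why the small page table
page budget stretches much further; that is also why do_boot_page_fault()
above now rounds the faulting address to PUD_SIZE rather than PMD_SIZE.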