On Sun, Mar 29, 2020 at 04:12:58PM +0200, Ard Biesheuvel wrote:
> When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
> different permissions (r-x for .text, r-- for .rodata, rw- for .data,
> etc) are rounded up to 2 MiB so they can be mapped more efficiently.
> In particular, it permits the segments to be mapped using level 2
> block entries when using 4k pages, which is expected to result in less
> TLB pressure.
>
> However, the mappings for the bulk of the kernel will use level 2
> entries anyway, and the misaligned fringes are organized such that they
> can take advantage of the contiguous bit, and use far fewer level 3
> entries than would be needed otherwise.
>
> This makes the value of this feature dubious at best, and since it is not
> enabled in defconfig or in the distro configs, it does not appear to be
> in wide use either. So let's just remove it.
>
> Signed-off-by: Ard Biesheuvel <ardb@xxxxxxxxxx>

Happy to take this patch via the arm64 tree for 5.7 (no new functionality),
unless you want it to go with your other relocation logic in the EFI stub
patches.

--
Catalin
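
For reference, a minimal sketch of the arithmetic behind the 4k-page figures
quoted above, assuming the standard arm64 translation constants; the macro
names mirror the kernel's, but this standalone program is only an
illustration and is not part of the patch:

    /*
     * With 4k pages, a level 2 block covers 2 MiB (512 level 3 entries),
     * and the contiguous bit lets 16 adjacent level 3 entries (64 KiB)
     * share a single TLB entry.
     */
    #include <stdio.h>

    #define PAGE_SHIFT  12                  /* 4 KiB pages */
    #define PAGE_SIZE   (1UL << PAGE_SHIFT)
    #define PMD_SHIFT   (PAGE_SHIFT + 9)    /* level 2 block */
    #define PMD_SIZE    (1UL << PMD_SHIFT)
    #define CONT_PTES   16                  /* PTEs grouped by the contiguous bit */

    int main(void)
    {
            printf("level 2 block size:        %lu MiB\n", PMD_SIZE >> 20);
            printf("level 3 entries per block: %lu\n", PMD_SIZE / PAGE_SIZE);
            printf("contiguous range:          %lu KiB (one TLB entry instead of %d)\n",
                   (CONT_PTES * PAGE_SIZE) >> 10, CONT_PTES);
            return 0;
    }

This is the sense in which the misaligned fringes remain cheap without the
2 MiB rounding: the bulk of each segment still maps with 2 MiB blocks, and
the leftover edges fold into 64 KiB contiguous runs rather than individual
4 KiB entries.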