On Sun, Mar 29, 2020 at 04:12:58PM +0200, Ard Biesheuvel wrote:
> When CONFIG_DEBUG_ALIGN_RODATA is enabled, kernel segments mapped with
> different permissions (r-x for .text, r-- for .rodata, rw- for .data,
> etc) are rounded up to 2 MiB so they can be mapped more efficiently.
> In particular, it permits the segments to be mapped using level 2
> block entries when using 4k pages, which is expected to result in less
> TLB pressure.
>
> However, the mappings for the bulk of the kernel will use level 2
> entries anyway, and the misaligned fringes are organized such that they
> can take advantage of the contiguous bit, and use far fewer level 3
> entries than would be needed otherwise.
>
> This makes the value of this feature dubious at best, and since it is not
> enabled in defconfig or in the distro configs, it does not appear to be
> in wide use either. So let's just remove it.
>
> Signed-off-by: Ard Biesheuvel <ardb@xxxxxxxxxx>
> ---
>  arch/arm64/Kconfig.debug                  | 13 -------------
>  arch/arm64/include/asm/memory.h           | 12 +-----------
>  drivers/firmware/efi/libstub/arm64-stub.c |  8 +++-----
>  3 files changed, 4 insertions(+), 29 deletions(-)

Acked-by: Will Deacon <will@xxxxxxxxxx>

But I would really like to go a step further and rip out the block mapping
support altogether so that we can fix non-coherent DMA aliases:

https://lore.kernel.org/lkml/20200224194446.690816-1-hch@xxxxxx

Will
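
[Editor's illustration, not part of the thread: to make the fringe argument in the quoted
commit message concrete, below is a minimal stand-alone sketch of the entry-count
arithmetic. The segment addresses are made up, and the standard arm64 4 KiB-granule
geometry is assumed (2 MiB level 2 blocks, 4 KiB level 3 pages, contiguous bit grouping
16 level 3 entries into one TLB entry). It only counts entries; it is not kernel code.]

/*
 * Sketch: count the translation-table entries needed to map a kernel
 * segment [start, end), mapping the 2 MiB-aligned middle with level 2
 * block entries and the misaligned fringes with level 3 page entries.
 * Addresses and sizes here are hypothetical.
 */
#include <stdio.h>

#define SZ_4K   0x1000UL
#define SZ_2M   0x200000UL

static void count_entries(unsigned long start, unsigned long end)
{
        /* First and last 2 MiB boundaries inside the segment. */
        unsigned long lvl2_start = (start + SZ_2M - 1) & ~(SZ_2M - 1);
        unsigned long lvl2_end   = end & ~(SZ_2M - 1);
        unsigned long blocks = 0, pages = 0;

        if (lvl2_start < lvl2_end) {
                blocks = (lvl2_end - lvl2_start) / SZ_2M;
                pages  = ((lvl2_start - start) + (end - lvl2_end)) / SZ_4K;
        } else {
                /* Segment never spans a full 2 MiB block. */
                pages = (end - start) / SZ_4K;
        }

        printf("%#lx-%#lx: %lu level 2 blocks, %lu level 3 pages "
               "(~%lu contiguous groups)\n",
               start, end, blocks, pages, pages / 16);
}

int main(void)
{
        /* Hypothetical ~9 MiB segment, 64 KiB aligned but not 2 MiB aligned. */
        count_entries(0x10080000UL, 0x10980000UL);
        /* The same segment padded out to 2 MiB alignment on both ends. */
        count_entries(0x10000000UL, 0x10a00000UL);
        return 0;
}

With these made-up numbers the unaligned segment still gets 3 level 2 blocks for its
bulk, and its two 1.5 MiB fringes need 768 level 3 entries that the contiguous bit
collapses into roughly 48 TLB entries, which is the "far fewer level 3 entries" point
the commit message makes against padding everything to 2 MiB.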