On 27 January 2016 at 07:54,  <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
>
> This is a note to let you know that I've just added the patch titled
>
>     arm64: mm: use correct mapping granularity under DEBUG_RODATA
>
> to the 4.1-stable tree which can be found at:
>     http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
>
> The filename of the patch is:
>     arm64-mm-use-correct-mapping-granularity-under-debug_rodata.patch
> and it can be found in the queue-4.1 subdirectory.
>
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable@xxxxxxxxxxxxxxx> know about it.
>

Apologies for the late notice: as mentioned in the other thread, this
will fail to build due to a missing #define of SWAPPER_BLOCK_SIZE.
I will submit a new version specific to -stable (which just adds the
#define locally).

Thanks,
Ard.
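For reference, a sketch of what such a local definition could look like,
mirroring the upstream SWAPPER_BLOCK_SIZE logic from
arch/arm64/include/asm/kernel-pgtable.h (a header v4.1 predates). The
names are upstream's, but the exact form of the -stable respin is an
assumption:

/*
 * Hypothetical local fallback for v4.1, which lacks
 * arch/arm64/include/asm/kernel-pgtable.h. Mirrors the upstream
 * meaning of SWAPPER_BLOCK_SIZE: the swapper maps the kernel with
 * sections (2 MB with 4k pages) unless 64k pages are in use, in
 * which case it maps individual 64 KB pages.
 */
#ifdef CONFIG_ARM64_64K_PAGES
#define SWAPPER_BLOCK_SIZE	PAGE_SIZE
#else
#define SWAPPER_BLOCK_SIZE	SECTION_SIZE
#endif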
> From 4fee9f364b9b99f76732f2a6fd6df679a237fa74 Mon Sep 17 00:00:00 2001
> From: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> Date: Mon, 16 Nov 2015 11:18:14 +0100
> Subject: arm64: mm: use correct mapping granularity under DEBUG_RODATA
>
> From: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
>
> commit 4fee9f364b9b99f76732f2a6fd6df679a237fa74 upstream.
>
> When booting a 64k pages kernel that is built with CONFIG_DEBUG_RODATA
> and resides at an offset that is not a multiple of 512 MB, the rounding
> that occurs in __map_memblock() and fixup_executable() results in
> incorrect regions being mapped.
>
> The following snippet from /sys/kernel/debug/kernel_page_tables shows
> how, when the kernel is loaded 2 MB above the base of DRAM at 0x40000000,
> the first 2 MB of memory (which may be inaccessible from non-secure EL1
> or just reserved by the firmware) is inadvertently mapped into the end of
> the module region.
>
>   ---[ Modules start ]---
>   0xfffffdffffe00000-0xfffffe0000000000     2M RW NX ... UXN MEM/NORMAL
>   ---[ Modules end ]---
>   ---[ Kernel Mapping ]---
>   0xfffffe0000000000-0xfffffe0000090000   576K RW NX ... UXN MEM/NORMAL
>   0xfffffe0000090000-0xfffffe0000200000  1472K ro x  ... UXN MEM/NORMAL
>   0xfffffe0000200000-0xfffffe0000800000     6M ro x  ... UXN MEM/NORMAL
>   0xfffffe0000800000-0xfffffe0000810000    64K ro x  ... UXN MEM/NORMAL
>   0xfffffe0000810000-0xfffffe0000a00000  1984K RW NX ... UXN MEM/NORMAL
>   0xfffffe0000a00000-0xfffffe00ffe00000  4084M RW NX ... UXN MEM/NORMAL
>
> The same issue is likely to occur on 16k pages kernels whose load
> address is not a multiple of 32 MB (i.e., SECTION_SIZE). So round to
> SWAPPER_BLOCK_SIZE instead of SECTION_SIZE.
>
> Fixes: da141706aea5 ("arm64: add better page protections to arm64")
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> Acked-by: Mark Rutland <mark.rutland@xxxxxxx>
> Acked-by: Laura Abbott <labbott@xxxxxxxxxx>
> Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
> Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
>
> ---
>  arch/arm64/mm/mmu.c |   12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -307,8 +307,8 @@ static void __init __map_memblock(phys_a
>  	 * for now. This will get more fine grained later once all memory
>  	 * is mapped
>  	 */
> -	unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
> -	unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
> +	unsigned long kernel_x_start = round_down(__pa(_stext), SWAPPER_BLOCK_SIZE);
> +	unsigned long kernel_x_end = round_up(__pa(__init_end), SWAPPER_BLOCK_SIZE);
>
>  	if (end < kernel_x_start) {
>  		create_mapping(start, __phys_to_virt(start),
> @@ -396,18 +396,18 @@ void __init fixup_executable(void)
>  {
>  #ifdef CONFIG_DEBUG_RODATA
>  	/* now that we are actually fully mapped, make the start/end more fine grained */
> -	if (!IS_ALIGNED((unsigned long)_stext, SECTION_SIZE)) {
> +	if (!IS_ALIGNED((unsigned long)_stext, SWAPPER_BLOCK_SIZE)) {
>  		unsigned long aligned_start = round_down(__pa(_stext),
> -							 SECTION_SIZE);
> +							 SWAPPER_BLOCK_SIZE);
>
>  		create_mapping(aligned_start, __phys_to_virt(aligned_start),
>  				__pa(_stext) - aligned_start,
>  				PAGE_KERNEL);
>  	}
>
> -	if (!IS_ALIGNED((unsigned long)__init_end, SECTION_SIZE)) {
> +	if (!IS_ALIGNED((unsigned long)__init_end, SWAPPER_BLOCK_SIZE)) {
>  		unsigned long aligned_end = round_up(__pa(__init_end),
> -						     SECTION_SIZE);
> +						     SWAPPER_BLOCK_SIZE);
>  		create_mapping(__pa(__init_end), (unsigned long)__init_end,
>  				aligned_end - __pa(__init_end),
>  				PAGE_KERNEL);
>
>
> Patches currently in stable-queue which might be from ard.biesheuvel@xxxxxxxxxx are
>
> queue-4.1/arm-arm64-kvm-test-properly-for-a-pte-s-uncachedness.patch
> queue-4.1/arm64-mm-use-correct-mapping-granularity-under-debug_rodata.patch
> queue-4.1/arm-arm64-kvm-correct-pte-uncachedness-check.patch
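As an aside, the rounding arithmetic behind the bug is easy to reproduce
in isolation. The following is a standalone sketch, not kernel code: it
reimplements the kernel's power-of-two round_down()/round_up() helpers
and assumes the 64k-pages values from the commit message (SECTION_SIZE =
512 MB, SWAPPER_BLOCK_SIZE = PAGE_SIZE = 64 KB) and a kernel text start
2 MB above a DRAM base of 0x40000000:

/* rounding-demo.c: standalone illustration of the granularity bug.
 * Not kernel code; the constants below assume a 64k-pages kernel.
 */
#include <stdio.h>
#include <stdint.h>

/* Simplified, power-of-two-only versions of the kernel helpers. */
#define round_down(x, y)	((x) & ~((uint64_t)(y) - 1))
#define round_up(x, y)		((((x) - 1) | ((uint64_t)(y) - 1)) + 1)

#define SECTION_SIZE		(512ULL << 20)	/* 512 MB with 64k pages */
#define SWAPPER_BLOCK_SIZE	(64ULL << 10)	/* PAGE_SIZE with 64k pages */

int main(void)
{
	/* Kernel text loaded 2 MB above the DRAM base at 0x40000000. */
	uint64_t pa_stext = 0x40200000ULL;

	/* Old code: rounds down past the start of the kernel, pulling
	 * the first 2 MB of DRAM into the executable kernel mapping. */
	printf("SECTION_SIZE:       0x%llx\n",
	       (unsigned long long)round_down(pa_stext, SECTION_SIZE));

	/* Fixed code: _stext is already 64 KB aligned, so the start of
	 * the mapping is unchanged. */
	printf("SWAPPER_BLOCK_SIZE: 0x%llx\n",
	       (unsigned long long)round_down(pa_stext, SWAPPER_BLOCK_SIZE));
	return 0;
}

This prints 0x40000000 and 0x40200000 respectively: the SECTION_SIZE
rounding covers the 2 MB below the kernel, which is exactly the region
that shows up at the end of the module area in the page-table dump above.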