This series ensures that ioremap_page_range() checks the alignment of its
input virtual address, along with the physical address, before creating
huge page kernel mappings. This avoids problems arising from freeing PMD
or PTE pgtable pages that may still contain valid entries. It also cleans
up the pgtable page address offset used in the arm64
[pud|pmd]_free_[pmd|pte]_page() helpers.

Changes in V3:

- Added virtual address alignment check in ioremap_page_range()
- Dropped VM_WARN_ONCE() as input virtual addresses are guaranteed to be
  aligned

Changes in V2: (https://patchwork.kernel.org/patch/10922795/)

- Replaced WARN_ON_ONCE() with VM_WARN_ONCE() as per Catalin

Changes in V1: (https://patchwork.kernel.org/patch/10921135/)

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Will Deacon <will.deacon@xxxxxxx>
Cc: Toshi Kani <toshi.kani@xxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: James Morse <james.morse@xxxxxxx>
Cc: Chintan Pandya <cpandya@xxxxxxxxxxxxxx>
Cc: Robin Murphy <robin.murphy@xxxxxxx>
Cc: Laura Abbott <labbott@xxxxxxxxxx>

Anshuman Khandual (2):
  mm/ioremap: Check virtual address alignment while creating huge mappings
  arm64/mm: Change offset base address in [pud|pmd]_free_[pmd|pte]_page()

 arch/arm64/mm/mmu.c | 6 +++---
 lib/ioremap.c       | 6 ++++++
 2 files changed, 9 insertions(+), 3 deletions(-)
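For illustration, a minimal sketch of the PMD-level condition this series
extends in lib/ioremap.c (simplified; the exact hunk may differ, and the
PUD level would get the analogous IS_ALIGNED(addr, PUD_SIZE) check):

	/*
	 * A huge mapping is only attempted when the range covers a
	 * full PMD and both the virtual and the physical address are
	 * PMD_SIZE aligned.
	 */
	if (ioremap_pmd_enabled() &&
	    ((next - addr) == PMD_SIZE) &&
	    IS_ALIGNED(addr, PMD_SIZE) &&	/* the new virtual address check */
	    IS_ALIGNED(phys_addr, PMD_SIZE) &&
	    pmd_free_pte_page(pmd, addr)) {
		if (pmd_set_huge(pmd, phys_addr, prot))
			continue;
	}

Without a PMD_SIZE-aligned virtual address, pmd_free_pte_page() could tear
down a PTE pgtable page whose entries still partially cover the range,
which is exactly the situation the new check rules out.

--
2.20.1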