On Fri, 2018-09-14 at 14:36 -0600, Toshi Kani wrote:
> On Wed, 2018-09-12 at 11:26 +0100, Will Deacon wrote:
> > The recently merged API for ensuring break-before-make on page-table
> > entries when installing huge mappings in the vmalloc/ioremap region is
> > fairly counter-intuitive, resulting in the arch freeing functions
> > (e.g. pmd_free_pte_page()) being called even on entries that aren't
> > present. This resulted in a minor bug in the arm64 implementation, giving
> > rise to spurious VM_WARN messages.
> > 
> > This patch moves the pXd_present() checks out into the core code,
> > refactoring the callsites at the same time so that we avoid the complex
> > conjunctions when determining whether or not we can put down a huge
> > mapping.
> > 
> > Cc: Chintan Pandya <cpandya@xxxxxxxxxxxxxx>
> > Cc: Toshi Kani <toshi.kani@xxxxxxx>
> > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > Cc: Michal Hocko <mhocko@xxxxxxxx>
> > Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > Suggested-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> > Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
> 
> Yes, this looks nicer.
> 
> Reviewed-by: Toshi Kani <toshi.kani@xxxxxxx>

Sorry, I take it back since I got a question...

> +static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
> +				unsigned long end, phys_addr_t phys_addr,
> +				pgprot_t prot)
> +{
> +	if (!ioremap_pmd_enabled())
> +		return 0;
> +
> +	if ((end - addr) != PMD_SIZE)
> +		return 0;
> +
> +	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
> +		return 0;
> +
> +	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
> +		return 0;

Is pmd_present() a proper check here?  We probably do not have this
case for ioremap, but I wonder if one can drop the p-bit while it has
a pte page underneath.

Thanks,
-Toshi

> +
> +	return pmd_set_huge(pmd, phys_addr, prot);
> +}
> +