On 11/10/22 22:21, Michael Kelley wrote:
> --- a/arch/x86/mm/ioremap.c
> +++ b/arch/x86/mm/ioremap.c
> @@ -218,7 +218,7 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
>  	 */
>  	offset = phys_addr & ~PAGE_MASK;
>  	phys_addr &= PHYSICAL_PAGE_MASK;
> -	size = PAGE_ALIGN(last_addr+1) - phys_addr;
> +	size = (PAGE_ALIGN(last_addr+1) & PHYSICAL_PAGE_MASK) - phys_addr;

Michael, thanks for the explanation in your other reply.  First and
foremost, I *totally* missed the reason for this patch.  I was thinking
about issues that could pop up from the _lower_ bits being masked off.
Granted, your changelog _did_ say "upper bits", so shame on me.  But it
would be great to put some more background in the changelog to make it
a bit harder for silly reviewers to miss such things.

I'd also like to propose something that I think is more straightforward:

	/*
	 * Mappings have to be page-aligned
	 */
	offset = phys_addr & ~PAGE_MASK;
	phys_addr &= PAGE_MASK;
	size = PAGE_ALIGN(last_addr+1) - phys_addr;

	/*
	 * Mask out any bits not part of the actual physical
	 * address, like memory encryption bits.
	 */
	phys_addr &= PHYSICAL_PAGE_MASK;

Because, first of all, that "Mappings have to be page-aligned" comment
is (now) doing more than page alignment.  Second, the moment you mask
out the metadata bits, the 'size' calculation gets harder.  Doing it
in two phases (page alignment followed by metadata bit masking) cleanly
separates the two logical operations.
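
To make the failure mode concrete, here is a minimal user-space sketch
of the two calculations.  The mask constants and the bit-47 placement
of the encryption bit are illustrative stand-ins, not the kernel's
actual runtime values (which are derived from sme_me_mask and
__PHYSICAL_MASK_SHIFT), and it assumes a 64-bit build:

	/* Sketch only: a hypothetical 46-bit physical address space
	 * with a memory encryption bit at bit 47, loosely modeled on
	 * AMD SME.  Not the real kernel macros.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT		12
	#define PAGE_SIZE		(1ULL << PAGE_SHIFT)
	#define PAGE_MASK		(~(PAGE_SIZE - 1))
	#define PHYSICAL_PAGE_MASK	(PAGE_MASK & ((1ULL << 46) - 1))
	#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & PAGE_MASK)
	#define ENC_BIT			(1ULL << 47)	/* hypothetical encryption bit */

	int main(void)
	{
		uint64_t phys_addr = ENC_BIT | 0x100000123ULL;	/* encrypted address */
		uint64_t last_addr = phys_addr + 0x2000 - 1;	/* caller asked for 0x2000 bytes */

		/* One-phase masking: the encryption bit is stripped from
		 * phys_addr but survives in last_addr, so the subtraction
		 * produces a bogus, enormous size.
		 */
		uint64_t p1 = phys_addr & PHYSICAL_PAGE_MASK;
		uint64_t bad_size = PAGE_ALIGN(last_addr + 1) - p1;

		/* Two-phase masking: page-align and compute the size while
		 * both values still carry the encryption bit, then strip
		 * the metadata bits from phys_addr afterwards.
		 */
		uint64_t p2 = phys_addr & PAGE_MASK;
		uint64_t good_size = PAGE_ALIGN(last_addr + 1) - p2;
		p2 &= PHYSICAL_PAGE_MASK;

		printf("one-phase size: 0x%llx\n",
		       (unsigned long long)bad_size);	/* 0x800000003000 */
		printf("two-phase size: 0x%llx\n",
		       (unsigned long long)good_size);	/* 0x3000 */
		return 0;
	}

Either masking the encryption bit out of both sides of the subtraction
(as the patch above does) or deferring the metadata masking until after
the size calculation (as in the two-phase version) yields the sane
0x3000 result; the two-phase version just makes the intent of each step
obvious.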