On 2/24/22 05:57, Muchun Song wrote:
> On Thu, Feb 24, 2022 at 3:48 AM Joao Martins <joao.m.martins@xxxxxxxxxx> wrote:
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -6653,6 +6653,20 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>>  	}
>>  }
>>
>> +/*
>> + * With compound page geometry and when struct pages are stored in ram most
>> + * tail pages are reused. Consequently, the amount of unique struct pages to
>> + * initialize is a lot smaller that the total amount of struct pages being
>> + * mapped. This is a paired / mild layering violation with explicit knowledge
>> + * of how the sparse_vmemmap internals handle compound pages in the lack
>> + * of an altmap. See vmemmap_populate_compound_pages().
>> + */
>> +static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
>> +					      unsigned long nr_pages)
>> +{
>> +	return !altmap ? 2 * (PAGE_SIZE/sizeof(struct page)) : nr_pages;
>> +}
>> +
>
> Should be:
>
> 	return is_power_of_2(sizeof(struct page)) &&
> 	       !altmap ? 2 * (PAGE_SIZE/sizeof(struct page)) : nr_pages;
>

I had only half-addressed your previous comment. It's fixed now.