On Thu, Feb 24, 2022 at 3:48 AM Joao Martins <joao.m.martins@xxxxxxxxxx> wrote:
>
> Currently memmap_init_zone_device() ends up initializing 32768 pages
> when it only needs to initialize 128 given tail page reuse. That
> number is worse with 1GB compound pages, 262144 instead of 128. Update
> memmap_init_zone_device() to skip redundant initialization, detailed
> below.
>
> When a pgmap @vmemmap_shift is set, all pages are mapped at a given
> huge page alignment and use compound pages to describe them as opposed
> to a struct per 4K.
>
> With @vmemmap_shift > 0 and when struct pages are stored in ram
> (!altmap) most tail pages are reused. Consequently, the amount of
> unique struct pages is a lot smaller that the total amount of struct

s/that/than/g
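
Where the 128 comes from, as a rough sketch only (assuming 4K PAGE_SIZE,
64-byte struct page, and that with tail page reuse only the head vmemmap
page plus one tail vmemmap page hold unique struct pages per compound
page; the helper below is illustrative, not the actual patch code):

/*
 * Illustrative arithmetic, not the patch itself: with tail page reuse
 * only the first two vmemmap pages of a compound page carry unique
 * struct pages, regardless of the compound page size.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define STRUCT_PAGE_SZ	64UL		/* stand-in for sizeof(struct page) */

static unsigned long unique_pages(unsigned long vmemmap_shift)
{
	unsigned long total = 1UL << vmemmap_shift;

	if (!vmemmap_shift)
		return total;	/* no compound pages: every page is unique */

	/* head vmemmap page + first (reused) tail vmemmap page */
	return 2 * (PAGE_SIZE / STRUCT_PAGE_SZ);
}

int main(void)
{
	/* 2M compound pages (shift 9):  512 struct pages,    128 unique */
	printf("2M: %lu -> %lu\n", 1UL << 9,  unique_pages(9));
	/* 1G compound pages (shift 18): 262144 struct pages, 128 unique */
	printf("1G: %lu -> %lu\n", 1UL << 18, unique_pages(18));
	return 0;
}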