On Fri, Jun 15, 2018 at 11:57:33AM -0400, Pavel Tatashin wrote:
> The role of zero_resv_unavail() is to make sure that every struct page
> that is allocated but is not backed by memory accessible to the kernel is
> zeroed and not left in some uninitialized state.
>
> Since struct pages are allocated in blocks (2M pages in the x86 case), we
> can skip pageblock_nr_pages at a time when the first one is found to be
> invalid.
>
> This optimization may help since now on x86 every hole in the e820 maps
> is marked as reserved in memblock, and thus will go through this function.
>
> This function is called before sched_clock() is initialized, so I used my
> x86 early boot clock patches to measure the performance improvement.
>
> With a 1T hole on an i7-8700 we currently take 0.606918s of boot time, but
> with this optimization only 0.001103s.
>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@xxxxxxxxxx>
> ---
>  mm/page_alloc.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1521100f1e63..94f1b3201735 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -6404,8 +6404,11 @@ void __paginginit zero_resv_unavail(void)
>  	pgcnt = 0;
>  	for_each_resv_unavail_range(i, &start, &end) {
>  		for (pfn = PFN_DOWN(start); pfn < PFN_UP(end); pfn++) {
> -			if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages)))
> +			if (!pfn_valid(ALIGN_DOWN(pfn, pageblock_nr_pages))) {
> +				pfn = ALIGN_DOWN(pfn, pageblock_nr_pages)
> +					+ pageblock_nr_pages - 1;
>  				continue;
> +			}
>  			mm_zero_struct_page(pfn_to_page(pfn));
>  			pgcnt++;
>  		}

Hi Pavel,

Thanks for the patch. This looks good to me.

Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>

> --
> 2.17.1
>

Best Regards
Oscar Salvador
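
For readers following the thread, below is a minimal userspace sketch of the
skip arithmetic the new branch performs. The pageblock size of 512 pages
(a 2M pageblock of 4K pages) and the simplified ALIGN_DOWN macro are
assumptions made for illustration only, not the kernel's own definitions:

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES	512UL	/* assumed: 2M pageblock / 4K pages */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))	/* power-of-two align down */

int main(void)
{
	unsigned long pfn = 1000;	/* arbitrary pfn inside an invalid pageblock */

	/* Jump to the last pfn of the current (invalid) pageblock... */
	pfn = ALIGN_DOWN(pfn, PAGEBLOCK_NR_PAGES) + PAGEBLOCK_NR_PAGES - 1;
	printf("last pfn of invalid block: %lu\n", pfn);	/* prints 1023 */

	/* ...so the loop's pfn++ lands on the first pfn of the next block. */
	pfn++;
	printf("next block starts at pfn:  %lu\n", pfn);	/* prints 1024 */

	return 0;
}

The effect is that pfn_valid() is no longer called for every remaining pfn of
a pageblock already known to be invalid, which is where the boot-time saving
on large holes comes from.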