On 12/9/22 20:26, Kirill A. Shutemov wrote:
>> > #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>> > 			/*
>> > 			 * Watermark failed for this zone, but see if we can
>> > @@ -4299,6 +4411,9 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>> >
>> > 			return page;
>> > 		} else {
>> > +			if (try_to_accept_memory(zone))
>> > +				goto try_this_zone;
>>
>> On the other hand, here we failed the full rmqueue(), including the
>> potentially fragmenting fallbacks, so I'm worried that before we finally
>> fail all of that and resort to accepting more memory, we have already
>> fragmented the already accepted memory more than necessary.
>
> I'm not sure I follow. We accept memory in pageblock chunks. Do we want to
> allocate from a free pageblock if we have other memory to tap from? It
> doesn't make sense to me.

The fragmentation avoidance based on migratetype does work at pageblock
granularity, so yes: if you accept a single pageblock worth of memory and
then (through __rmqueue_fallback()) end up serving both movable and
unmovable allocations from it, the whole fragmentation avoidance mechanism
is defeated, and you end up with unmovable allocations (e.g. page tables)
scattered over many pageblocks and an inability to allocate any huge pages.

>> So one way to prevent this would be to move the acceptance into rmqueue()
>> to happen before __rmqueue_fallback(), which I originally had in mind and
>> maybe suggested previously.
>
> I guess it should be pretty straightforward to fail __rmqueue_fallback()
> if there's a non-empty unaccepted_pages list and steer to
> try_to_accept_memory() this way.

That could be a way, indeed. We do have ALLOC_NOFRAGMENT, which it might be
possible to employ here.

But maybe the zone_watermark_fast() modification would be simpler yet
sufficient. It makes sense to me that we'd try to keep a high watermark's
worth of pre-accepted memory.
zone_watermark_fast() would fail at the low watermark, so we could try
accepting (high - low) at a time instead of a single pageblock.

> But I still don't understand why.

To avoid what I described above.

>> But maybe a less intrusive and more robust way would be to track how much
>> memory is unaccepted, and actually decrement that amount from free memory
>> in zone_watermark_fast() in order to force earlier failure of that check,
>> and thus to accept more memory and give us a buffer of truly accepted and
>> available memory up to the high watermark, which should hopefully prevent
>> most of the fallbacks. Then the code I flagged above as currently
>> unnecessary would make perfect sense.
>
> The next patch adds per-node unaccepted memory accounting. We can move it
> per-zone if it would help.

Right.

>> And maybe Mel will have some ideas as well.
>
> I don't have much expertise in the page allocator. Any input is valuable.
>
>> > +
>> > #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
>> > 		/* Try again if zone has deferred pages */
>> > 		if (static_branch_unlikely(&deferred_pages)) {
>> > @@ -6935,6 +7050,10 @@ static void __meminit zone_init_free_lists(struct zone *zone)
>> > 		INIT_LIST_HEAD(&zone->free_area[order].free_list[t]);
>> > 		zone->free_area[order].nr_free = 0;
>> > 	}
>> >
>> > +#ifdef CONFIG_UNACCEPTED_MEMORY
>> > +	INIT_LIST_HEAD(&zone->unaccepted_pages);
>> > +#endif
>> > }
>> >
>> > /*
>> >