On 7/17/20 10:10 AM, Vlastimil Babka wrote:
> On 7/17/20 9:29 AM, Joonsoo Kim wrote:
>> On Thu, Jul 16, 2020 at 4:45 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>>>
>>> On 7/16/20 9:27 AM, Joonsoo Kim wrote:
>>> > On Wed, Jul 15, 2020 at 5:24 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>>> >> > /*
>>> >> >  * get_page_from_freelist goes through the zonelist trying to allocate
>>> >> >  * a page.
>>> >> > @@ -3706,6 +3714,8 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
>>> >> >  	struct pglist_data *last_pgdat_dirty_limit = NULL;
>>> >> >  	bool no_fallback;
>>> >> >
>>> >> > +	current_alloc_flags(gfp_mask, &alloc_flags);
>>> >>
>>> >> I don't see why to move the test here? It will still be executed in the
>>> >> fastpath, if that's what you wanted to avoid.
>>> >
>>> > I want to execute it on the fastpath, too. The reason that I moved it here
>>> > is that alloc_flags could be reset on the slowpath. See the code where
>>> > __gfp_pfmemalloc_flags() is called. This is the only place where I can apply
>>> > this option to all the allocation paths at once.
>>>
>>> But get_page_from_freelist() might be called multiple times in the slowpath, and
>>> also anyone looking for gfp and alloc flags setup will likely not examine this
>>> function. I don't see a problem in having it in two places that already deal
>>> with alloc_flags setup, as it is now.
>>
>> I agree that anyone looking at alloc flags will easily miss that function. Okay,
>> I will put it back in its original place, although we now need to add one
>> more place. *Three places* are gfp_to_alloc_flags(), prepare_alloc_pages() and
>> __gfp_pfmemalloc_flags().
>
> Hm, the check below should also work for ALLOC_OOM|ALLOC_NOCMA then.
>
> 	/* Avoid allocations with no watermarks from looping endlessly */
> 	if (tsk_is_oom_victim(current) &&
> 	    (alloc_flags == ALLOC_OOM ||
> 	     (gfp_mask & __GFP_NOMEMALLOC)))
> 		goto nopage;
>
> Maybe it's simpler to change get_page_from_freelist() then. But document it well.

But then we have e.g. should_reclaim_retry(), which calls __zone_watermark_ok(),
where ALLOC_CMA plays a role too, so that means we should have alloc_mask set up
correctly wrt ALLOC_CMA at the __alloc_pages_slowpath() level...

>> Thanks.
>>
>