On Thu, Aug 22, 2024 at 4:04 PM Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi Michal,
>
> On 2024/8/22 15:54, Michal Hocko wrote:
> > On Thu 22-08-24 15:01:43, Gao Xiang wrote:
> >> In my opinion, I'm not sure what the PAGE_ALLOC_COSTLY_ORDER
> >> restriction means for a single shot. Even if you don't consider
> >> a virtually consecutive buffer, people could also do
> >> < PAGE_ALLOC_COSTLY_ORDER allocations multiple times to put almost
> >> the same heavy workload on the whole system. And we also allow
> >> direct/kswapd reclaim here.
> >
> > Quite honestly, I do not think that the PAGE_ALLOC_COSTLY_ORDER
> > constraint makes sense outside of the page allocator proper. There is
> > no reason why vmalloc NOFAIL should be constrained by that. Sure, it
> > should be constrained to some value, but considering it is just a
> > bunch of PAGE_SIZE allocations, the limit could be higher. I am not
> > sure where the practical limit should be, but anything that requires
> > more than a couple of MBs seems really excessive.
>
> Yeah, totally agreed, that would make my own life easier; of
> course I will not allocate MBs insanely.
>
> I've always been trying to kill unnecessary NOFAILs (mostly together
> with code cleanups), but if a failure path adds more than
> 100 LOCs just for a rare failure under extreme workloads, I _do_
> hope kvmalloc(NOFAIL) could work instead.

If the LOC count of the error handler is the concern, I believe we can
simplify it to a single line: while (!alloc()), which is essentially
what NOFAIL does, and is also the reason callers desperately want
NOFAIL.

A better approach might be failing after a maximum number of retries
at the call site, for example:

    while (try < max_retries && !alloc())

At least that is better than the endless loop in the page allocator.
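For concreteness, here is a minimal sketch of that bounded-retry idea.
The wrapper name, the retry limit, and the sleep between attempts are
all illustrative assumptions on my side, not an existing kernel API:

#include <linux/slab.h>
#include <linux/delay.h>

#define ALLOC_MAX_RETRIES 5	/* hypothetical per-caller policy */

static void *kvmalloc_bounded(size_t size)
{
	void *p;
	int try;

	for (try = 0; try < ALLOC_MAX_RETRIES; try++) {
		/* GFP_KERNEL already allows direct reclaim and kswapd */
		p = kvmalloc(size, GFP_KERNEL);
		if (p)
			return p;
		msleep(20);	/* let reclaim make progress before retrying */
	}
	return NULL;	/* caller handles the failure instead of looping forever */
}

How many retries are acceptable, and whether to sleep between them,
obviously depends on the call site; the point is just that the
give-up policy lives with the caller rather than in the page
allocator.

--
Regards
Yafang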