On Tue 03-09-24 19:53:41, Kent Overstreet wrote:
[...]
> However, if we agreed that GFP_NOFAIL meant "only fail if it is not
> possible to satisfy this allocation" (and I have been arguing that that
> is the only sane meaning) - then that could lead to a lot of error paths
> getting simpler.
>
> Because there are a lot of places where there's essentially no good
> reason to bubble up an -ENOMEM to userspace; if we're actually out of
> memory the current allocation is just one out of many and not
> particularly special, better to let the oom killer handle it...

This is exactly the GFP_KERNEL semantic for low order allocations, or
kvmalloc for that matter. They simply never fail unless a couple of
corner cases apply - e.g. the allocating task is an oom victim and all
of the oom memory reserves have been consumed. That is what we call
"not possible to allocate".

> So the error paths would be more along the lines of "there's a bug, or
> userspace has requested something crazy, just shut down gracefully".

How do you expect that to be done? Who is going to go over all those
GFP_NOFAIL users? And what kind of guidelines should they follow? It is
clear that they believe they cannot handle the failure gracefully,
which is why they requested GFP_NOFAIL in the first place. Many of them
do not have a return value to propagate an error through. So what
exactly do you expect proper GFP_NOFAIL users to do, and what should
happen to those that request an unsupported size or allocation mode?

> While we're at it, the definition of what allocation size is "too big"
> is something we'd want to look at. Right now it's hardcoded to INT_MAX
> for non GFP_NOFAIL and (I believe) 2 pages for GFP_NOFAIL, we might want
> to consider doing something based on total memory in the machine and
> have the same limit apply to both...

Yes, we need to define some reasonable maximum supported sizes. For the
page allocator this has been order > 1, and considering we have had a
warning about such requests for years without a single report, we can
assume we do not have such abusers. For kvmalloc the story is
different; the current INT_MAX is not a practical limit at all. Past
experience says that anything based on the amount of memory just
doesn't work (e.g. hash table sizes that used to scale that way, and
there are other examples). So we should be practical here: look at the
existing users, see what they really need, and put a cap above that.

--
Michal Hocko
SUSE Labs
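
[To make the "no return value to propagate" point above concrete, here
is a minimal sketch of the two call-site shapes being contrasted.
struct foo_state, foo_init() and foo_reset() are made up for
illustration; only kmalloc(), kfree(), GFP_KERNEL and __GFP_NOFAIL are
real kernel interfaces, and this is a sketch rather than a proposal.]

#include <linux/slab.h>
#include <linux/gfp.h>
#include <linux/errno.h>

struct foo_state {
	int id;
	char buf[64];
};

/*
 * Plain GFP_KERNEL: the caller has a return value, so it can propagate
 * -ENOMEM, even though for a small allocation like this the failure
 * path is essentially never taken in practice.
 */
static int foo_init(struct foo_state **out)
{
	struct foo_state *s = kmalloc(sizeof(*s), GFP_KERNEL);

	if (!s)
		return -ENOMEM;	/* rarely, if ever, reached for small sizes */
	*out = s;
	return 0;
}

/*
 * __GFP_NOFAIL in a void context: there is no channel to report failure
 * back to the caller, which is why such users asked for NOFAIL
 * semantics in the first place.
 */
static void foo_reset(struct foo_state **slot)
{
	struct foo_state *s = kmalloc(sizeof(*s), GFP_KERNEL | __GFP_NOFAIL);

	kfree(*slot);
	*slot = s;	/* the allocation is relied upon to have succeeded */
}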