On 3/20/24 19:32, Dan Carpenter wrote:
> On Tue, Mar 12, 2024 at 03:46:32PM +0100, Vlastimil Babka wrote:
>> But if we change it to effectively mean GFP_NOFAIL (for non-costly
>> allocations), there should be a manageable number of places to change to a
>> variant that allows failure.
>
> What does that even mean if GFP_NOFAIL can fail for "costly" allocations?
> I thought GFP_NOFAIL couldn't fail at all...

Yeah, the suggestion was that GFP_KERNEL would act as GFP_NOFAIL but only
for non-costly allocations. Anything marked GFP_NOFAIL would still be fully
nofail.

> Unfortunately, it's common that when we can't decide on a sane limit for
> something people just say "let the user decide based on how much memory
> they have". I have added some integer overflow checks which allow the
> user to allocate up to UINT_MAX bytes so I know this code is out
> there. We can't just s/GFP_KERNEL/GFP_NOFAIL/.

Maybe we could start producing warnings for costly GFP_KERNEL allocations
to get them converted away faster. Anything that's user-controlled most
likely shouldn't be GFP_KERNEL.

> From a static analysis perspective it would be nice if the callers
> explicitly marked which allocations can fail and which can't.

As I suggested, it would be nice not to wait until everything is explicitly
marked one way or another. I get the comparison with the BKL, but the kernel
has also grown much larger since the BKL times?

Good point that it's not ideal if the size is unknown. Maybe the tools could
be used to point out places where the size cannot be determined, so those
should be converted first? Also, tools could warn about attempts to handle
failure, to point out places where that handling could be removed?

> regards,
> dan carpenter
>
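
To make the costly vs. non-costly distinction concrete, here is a rough
sketch; the struct and function names are made up for illustration and not
taken from any real driver or patch. The idea is that a small fixed-size
GFP_KERNEL allocation would effectively become nofail under the proposal,
while a user-controlled size can cross PAGE_ALLOC_COSTLY_ORDER and therefore
has to keep explicit failure handling (and ideally a sane cap):

#include <linux/slab.h>
#include <linux/mm.h>
#include <linux/sizes.h>

/* Hypothetical example for illustration only. */
struct foo_ctx {
	void *buf;
	size_t len;
};

/*
 * Small fixed-size request: order 0, well below PAGE_ALLOC_COSTLY_ORDER,
 * so under the proposal this GFP_KERNEL allocation would effectively
 * not fail.
 */
static struct foo_ctx *foo_ctx_alloc(void)
{
	return kzalloc(sizeof(struct foo_ctx), GFP_KERNEL);
}

/*
 * User-controlled size: can exceed the costly order, so the caller must
 * keep explicit failure handling and enforce some limit.
 */
static int foo_ctx_set_buf(struct foo_ctx *ctx, size_t user_len)
{
	if (user_len > SZ_1M)	/* arbitrary cap for the example */
		return -EINVAL;

	ctx->buf = kvmalloc(user_len, GFP_KERNEL);
	if (!ctx->buf)
		return -ENOMEM;
	ctx->len = user_len;
	return 0;
}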