On Thu, 2024-02-29 at 22:09 -0500, Kent Overstreet wrote:
> On Fri, Mar 01, 2024 at 02:48:52AM +0000, Matthew Wilcox wrote:
> > On Thu, Feb 29, 2024 at 09:39:17PM -0500, Kent Overstreet wrote:
> > > On Fri, Mar 01, 2024 at 01:16:18PM +1100, NeilBrown wrote:
> > > > Insisting that GFP_KERNEL allocations never returned NULL would
> > > > allow us to remove a lot of untested error handling code....
> > >
> > > If memcg ever gets enabled for all kernel side allocations we
> > > might start seeing failures of GFP_KERNEL allocations.
> >
> > Why would we want that behaviour?  A memcg-limited allocation
> > should behave like any other allocation -- block until we've freed
> > some other memory in this cgroup, either by swap or killing or ...
>
> It's not uncommon to have a more efficient way of doing something if
> you can allocate more memory, but still have the ability to run in a
> more bounded amount of space if you need to; I write code like this
> quite often.

The cgroup design is to do what we usually do, but within settable
hard and soft limits.  So if the kernel could make GFP_KERNEL wait
without failing, the cgroup case would mirror that.

> Or maybe you just want the syscall to return an error instead of
> blocking for an unbounded amount of time if userspace asks for
> something silly.

Warning on allocations above a certain size without MAY_FAIL would
seem to cover all those cases.  If there is a case for requiring
instant allocation, you always have GFP_ATOMIC, and, I suppose, we
could even do a bounded-reclaim allocation that tries for a certain
time and then fails.

> Honestly, relying on the OOM killer and saying that because that now
> we don't have to write and test your error paths is a lazy cop out.

The OOM killer is the most extreme outcome.  Usually reclaim (hugely
simplified) drops clean cache first, then runs the shrinkers, then
tries to write out dirty cache.  Only after that has found nothing
for a few iterations does the OOM killer get activated.

> The same kind of thinking got us overcommit, where yes we got an
> increase in efficiency, but the cost was that everyone started
> assuming and relying on overcommit, so now it's impossible to run
> without overcommit enabled except in highly controlled environments.

That might be true for your use case, but it certainly isn't true for
a cheap hosting cloud using containers: overcommit is where you make
your money, so it's absolutely standard operating procedure.  I
wouldn't call cheap hosting a "highly controlled environment"; they're
just making a bet that they won't get caught out too often.

> And that means allocation failure as an effective signal is just
> completely busted in userspace.  If you want to write code in
> userspace that uses as much memory as is available and no more, you
> _can't_, because system behaviour goes to shit if you have overcommit
> enabled or a bunch of memory gets wasted if overcommit is disabled
> because everyone assumes that's just what you do.

OK, this seems to be specific to your use case again, because if you
look at what major user space processes like web browsers do, they
allocate way more than the physical memory available to them for
cache and assume the kernel will take care of it.  Making failure a
signal for being over the working set would cause all these
applications to segfault almost immediately.

I think what you're asking for is an API that tries to calculate the
current available headroom in the working set?  That's highly
heuristic, but the mm people might have an idea how to do it.
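The closest thing we have today is probably the MemAvailable field in
/proc/meminfo, which is itself only a heuristic estimate of how much
memory could be allocated without pushing the system into swap.  A
rough userspace sketch of reading it (read_mem_available() is just an
illustrative helper name, not an existing API):

#include <stdio.h>

/*
 * Heuristic only: returns the kernel's MemAvailable estimate in kB,
 * or -1 if the field can't be found.
 */
static long read_mem_available(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];
	long kb = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
			break;
	}
	fclose(f);
	return kb;
}

Even that is only a point-in-time estimate, though, so it doesn't
really solve the "use all the memory and no more" problem.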
> Let's _not_ go that route in the kernel.  I have pointy sticks to
> brandish at people who don't want to deal with properly handling
> errors.

Error legs are the least exercised and most bug-prone (and therefore
exploit-prone) pieces of code in C.  If we can get rid of them, we
should.

James