On Mon, Feb 15, 2021 at 05:10:38PM +0100, Jesper Dangaard Brouer wrote:
> On Mon, 15 Feb 2021 12:00:56 +0000
> Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:
>
> > On Thu, Feb 11, 2021 at 01:26:28PM +0100, Jesper Dangaard Brouer wrote:
> [...]
> > >
> > > I also suggest the API can return fewer pages than requested, because I
> > > want to "exit"/return if it needs to go into an expensive code path
> > > (like the buddy allocator or compaction). I'm assuming we have flags to
> > > give us this behavior (via gfp_flags or alloc_flags)?
> > >
> >
> > The API returns the number of pages placed on a list, so policies
> > around how aggressively it should allocate the requested number of
> > pages could be adjusted without changing the API. Passing in policy
> > requests via gfp_flags may be problematic as most (all?) bits are
> > already used.
>
> Well, I was just thinking that I would use GFP_ATOMIC instead of
> GFP_KERNEL to "communicate" that I don't want this call to take too
> long (like sleeping). I'm not requesting any fancy policy :-)
>

The NFS use case requires the opposite semantics -- it really needs those
allocations to succeed:
https://lore.kernel.org/r/161340498400.7780.962495219428962117.stgit@xxxxxxxxxxxxxxxxxxxxx

I've asked what code it's based on, as it's not 5.11, and I'll iron that
out first. Then it might be clearer what the "can fail" semantics should
look like. I think it would be best to have pairs of patches where the
first patch adjusts the semantics of the bulk allocator and the second
adds a user. That will limit the amount of code carried in the
implementation. When the initial users are in place, the implementation
can be optimised, as the optimisations will require significant
refactoring and I do not want to refactor multiple times.

-- 
Mel Gorman
SUSE Labs
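
[Editor's note: for illustration only, a minimal caller-side sketch of the
"can fail" semantics discussed above. The alloc_pages_bulk() prototype
(gfp mask, requested count, list of pages, returning the number actually
allocated) and the refill_page_cache() helper are assumptions for this
example; the exact signature was still under discussion in this thread.]

        #include <linux/gfp.h>
        #include <linux/list.h>
        #include <linux/mm.h>

        /*
         * Refill a driver-private cache with up to 'want' order-0 pages.
         * Hypothetical helper; alloc_pages_bulk() as assumed here returns
         * the number of pages it managed to place on 'alloc_list'.
         */
        static unsigned long refill_page_cache(struct list_head *cache,
                                               unsigned long want)
        {
                LIST_HEAD(alloc_list);
                unsigned long got;

                /*
                 * GFP_ATOMIC reflects Jesper's use case above: the caller
                 * would rather get fewer pages back than sleep or enter an
                 * expensive path such as compaction.
                 */
                got = alloc_pages_bulk(GFP_ATOMIC, want, &alloc_list);

                /*
                 * The allocator may return fewer pages than requested, so
                 * the caller must cope with a partial (or empty) list.
                 */
                if (got)
                        list_splice_tail(&alloc_list, cache);

                return got;
        }

[A caller with the NFS-style requirement would instead pass GFP_KERNEL and
retry or fall back until it has everything it asked for, which is exactly
the tension between the two sets of semantics described in the mail above.]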