On Wed 03-03-21 09:22:50, Ben Widawsky wrote:
> On 21-03-03 18:14:30, Michal Hocko wrote:
> > On Wed 03-03-21 08:31:41, Ben Widawsky wrote:
> > > On 21-03-03 14:59:35, Michal Hocko wrote:
> > > > On Wed 03-03-21 21:46:44, Feng Tang wrote:
> > > > > On Wed, Mar 03, 2021 at 09:18:32PM +0800, Tang, Feng wrote:
> > > > > > On Wed, Mar 03, 2021 at 01:32:11PM +0100, Michal Hocko wrote:
> > > > > > > On Wed 03-03-21 20:18:33, Feng Tang wrote:
> > > > [...]
> > > > > > > > One thing I tried which can fix the slowness is:
> > > > > > > >
> > > > > > > > + gfp_mask &= ~(__GFP_DIRECT_RECLAIM | __GFP_KSWAPD_RECLAIM);
> > > > > > > >
> > > > > > > > which explicitly clears the 2 kinds of reclaim. And I thought it's too
> > > > > > > > hacky and didn't mention it in the commit log.
> > > > > > >
> > > > > > > Clearing __GFP_DIRECT_RECLAIM would be the right way to achieve
> > > > > > > GFP_NOWAIT semantic. Why would you want to exclude kswapd as well?
> > > > > >
> > > > > > When I tried gfp_mask &= ~__GFP_DIRECT_RECLAIM, the slowness couldn't
> > > > > > be fixed.
> > > > >
> > > > > I just double checked by rerun the test, 'gfp_mask &= ~__GFP_DIRECT_RECLAIM'
> > > > > can also accelerate the allocation much! though is still a little slower than
> > > > > this patch. Seems I've messed some of the tries, and sorry for the confusion!
> > > > >
> > > > > Could this be used as the solution? or the adding another fallback_nodemask way?
> > > > > but the latter will change the current API quite a bit.
> > > >
> > > > I haven't got to the whole series yet. The real question is whether the
> > > > first attempt to enforce the preferred mask is a general win. I would
> > > > argue that it resembles the existing single node preferred memory policy
> > > > because that one doesn't push heavily on the preferred node either. So
> > > > dropping just the direct reclaim mode makes some sense to me.
> > > >
> > > > IIRC this is something I was recommending in an early proposal of the
> > > > feature.
> > >
> > > My assumption [FWIW] is that the usecases we've outlined for multi-preferred
> > > would want more heavy pushing on the preference mask. However, maybe the uapi
> > > could dictate how hard to try/not try.
> >
> > What does that mean and what is the expectation from the kernel to be
> > more or less cast in stone?
>
> (I'm not positive I've understood your question, so correct me if I
> misunderstood)
>
> I'm not sure there is a stone-cast way to define it nor should we.

OK, I thought you wanted the behavior to diverge from the existing
MPOL_PREFERRED, which only prefers the configured node as a default
while the allocator is free to fall back to any other node under memory
pressure. The same should apply to multiple preferred nodes: only a
lightweight attempt over the preferred set before falling back to the
full nodeset. The paragraph I was replying to is not in line with this,
though.

> At the very least though, something in uapi that has a general mapping
> to GFP flags (specifically around reclaim) for the first round of
> allocation could make sense.

I do not think this is a good idea.

> In my head there are 3 levels of request possible for multiple nodes:
> 1. BIND: Those nodes or die.
> 2. Preferred hard: Those nodes and I'm willing to wait. Fallback if impossible.
> 3. Preferred soft: Those nodes but I don't want to wait.

I do agree that an intermediate "preference" can be helpful because
binding is just too strict and the OOM semantic is far from ideal. But
this would need a new policy.
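To be concrete about the semantic I have in mind, a rough sketch (an
illustration only, not code from the series; the function name and the
preferred-nodemask parameter are made up for the example) would do two
passes: the first confined to the preferred mask with
__GFP_DIRECT_RECLAIM cleared, and only on failure retry with the
caller's original gfp mask over the full nodeset:

	/*
	 * Illustrative sketch only - not the code from the series. The
	 * helper name and parameters are assumed for the example;
	 * __alloc_pages_nodemask is the current allocator entry point.
	 */
	static struct page *alloc_pages_preferred_many(gfp_t gfp,
						       unsigned int order,
						       int nid,
						       nodemask_t *prefmask)
	{
		struct page *page;

		/*
		 * Lightweight first pass: stay on the preferred nodes,
		 * keep kswapd kicks but do not enter direct reclaim, so
		 * a tight preferred node does not stall the allocation.
		 */
		page = __alloc_pages_nodemask(gfp & ~__GFP_DIRECT_RECLAIM,
					      order, nid, prefmask);
		if (page)
			return page;

		/* Fall back to all nodes with the original gfp mask. */
		return __alloc_pages_nodemask(gfp, order, nid, NULL);
	}

Whether the first pass should also skip waking kswapd (clearing
__GFP_KSWAPD_RECLAIM as well) is a separate question; the sketch keeps
it, in line with dropping only the direct reclaim mode as discussed
above.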
> Current UAPI in the series doesn't define a distinction between 2, and 3. As I
> understand the change, Feng is defining the behavior to be #3, which makes #2
> not an option. I sort of punted on defining it entirely, in the beginning.

I really think it should be in line with the existing preferred policy
behavior.
-- 
Michal Hocko
SUSE Labs