On Fri, Nov 09, 2018 at 12:48:06PM -0800, Andrew Morton wrote:
>On Thu, 8 Nov 2018 09:12:04 +0800 Wei Yang <richard.weiyang@xxxxxxxxx> wrote:
>
>> for_each_zone_zonelist() iterates the zonelist one by one, which means
>> it will iterate on zones on the same node. While get_partial_node()
>> checks available slab on node base instead of zone.
>>
>> This patch skip a node in case get_partial_node() fails to acquire slab
>> on that node.
>
>This is rather hard to follow.
>
>I *think* the patch is a performance optimization: prevent
>get_any_partial() from checking a node which get_partial_node() has
>already looked at?

You are right :-)

>
>Could we please have a more complete changelog?

Hmm... I would like to, but I am not sure which part is hard to follow.

If you could point out the pain point, I would be glad to think about how
to make it more obvious.

>
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1873,7 +1873,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
>>   * Get a page from somewhere. Search in increasing NUMA distances.
>>   */
>>  static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
>> -		struct kmem_cache_cpu *c)
>> +		struct kmem_cache_cpu *c, int except)
>>  {
>>  #ifdef CONFIG_NUMA
>>  	struct zonelist *zonelist;
>> @@ -1882,6 +1882,9 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
>>  	enum zone_type high_zoneidx = gfp_zone(flags);
>>  	void *object;
>>  	unsigned int cpuset_mems_cookie;
>> +	nodemask_t nmask = node_states[N_MEMORY];
>> +
>> +	node_clear(except, nmask);
>
>And please add a comment describing what's happening here and why it is
>done. Adding a sentence to the block comment over get_any_partial()
>would be suitable.
>

Sure, I would address this in the next spin.

--
Wei Yang
Help you, Help me