Re: [RFC 2/2] mm/slub: prefer NUMA locality over slight memory saving on NUMA machines

On 7/23/23 21:09, Hyeonggon Yoo wrote:
> By default, SLUB sets remote_node_defrag_ratio to 1000, which makes it
> (in most cases) take slabs from remote nodes first before trying to
> allocate new folios on the local node from the buddy allocator.
> 
> Documentation/ABI/testing/sysfs-kernel-slab says:
>> The file remote_node_defrag_ratio specifies the percentage of
>> times SLUB will attempt to refill the cpu slab with a partial
>> slab from a remote node as opposed to allocating a new slab on
>> the local node.  This reduces the amount of wasted memory over
>> the entire system but can be expensive.
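> 
> For reference, the gate implementing this lives in get_any_partial() in
> mm/slub.c and currently looks roughly like the snippet below (paraphrased,
> details vary by kernel version; the sysfs file takes a percentage that is
> stored internally multiplied by 10, hence the internal default of 1000):
> 
> 	/*
> 	 * Paraphrased from get_any_partial(): with the internal default
> 	 * of 1000, the remote-node search is skipped only when
> 	 * get_cycles() % 1024 exceeds 1000, i.e. rarely.
> 	 */
> 	if (!s->remote_node_defrag_ratio ||
> 	    get_cycles() % 1024 > s->remote_node_defrag_ratio)
> 		return NULL;	/* skip remote search; allocate locally */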
> 
> Although this made sense when it was introduced, the share of per-node
> partial lists in overall SLUB memory usage has decreased since the
> introduction of per-CPU partial lists. Therefore, it's worth reevaluating
> its cost in performance and memory usage.
> 
> [
> 	XXX: Add performance data. I tried to measure its impact on
> 	hackbench with a 2-socket NUMA machine, but it seems hackbench is
> 	too synthetic to benefit from this, because the skbuff_head_cache's
> 	size fits into the last level cache.
> 
> 	Probably more realistic workloads like netperf would benefit
> 	from this?
> ]
> 
> Set remote_node_defrag_ratio to zero by default. The new behavior is:
> 	1) try refilling per CPU partial list from the local node
> 	2) try allocating new slabs from the local node without reclamation
> 	3) try refilling per CPU partial list from remote nodes
> 	4) try allocating new slabs from the local node or remote nodes
> 
> If the user specifies remote_node_defrag_ratio, it probabilistically tries
> 3) first and then tries 2) and 4) in order, to avoid unexpected behavioral
> changes from the user's perspective.
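> 
> In pseudocode, the default ordering amounts to something like the sketch
> below (illustrative only, not the actual diff; the helpers exist in
> mm/slub.c but their signatures are simplified here):
> 
> 	/* Illustrative sketch of the proposed order, not the actual diff. */
> 	object = get_partial_node(s, local_node);	   /* 1) local partial slabs */
> 	if (!object)
> 		object = new_slab(s, flags & ~__GFP_DIRECT_RECLAIM,
> 				  local_node);		   /* 2) new local slab, no reclaim */
> 	if (!object)
> 		object = get_any_partial(s, flags);	   /* 3) remote partial slabs */
> 	if (!object)
> 		object = new_slab(s, flags, NUMA_NO_NODE); /* 4) new slab on any node */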

It makes sense to me, but as you note it would be great to demonstrate the
benefits, because it adds complexity, especially in the already complex
___slab_alloc(). Networking has indeed historically been a workload very
sensitive to slab performance, so it seems a good candidate.

We could also postpone this until we have tried the percpu arrays
improvements discussed at LSF/MM.




