On Thu, Aug 3, 2023 at 11:54 PM Vlastimil Babka <vbabka@xxxxxxx> wrote:
>
> On 7/23/23 21:09, Hyeonggon Yoo wrote:
> > By default, SLUB sets remote_node_defrag_ratio to 1000, which makes it
> > (in most cases) take slabs from remote nodes first before trying to
> > allocate new folios on the local node from the buddy allocator.
> >
> > Documentation/ABI/testing/sysfs-kernel-slab says:
> >> The file remote_node_defrag_ratio specifies the percentage of
> >> times SLUB will attempt to refill the cpu slab with a partial
> >> slab from a remote node as opposed to allocating a new slab on
> >> the local node. This reduces the amount of wasted memory over
> >> the entire system but can be expensive.
> >
> > Although this made sense when it was introduced, the share of per-node
> > partial lists in overall SLUB memory usage has decreased since the
> > introduction of per-cpu partial lists. Therefore, it's worth
> > reevaluating its overhead on performance and memory usage.
> >
> > [
> >   XXX: Add performance data. I tried to measure its impact on
> >   hackbench with a 2-socket NUMA machine, but it seems hackbench is
> >   too synthetic to benefit from this, because the skbuff_head_cache's
> >   size fits into the last level cache.
> >
> >   Probably more realistic workloads like netperf would benefit
> >   from this?
> > ]
> >
> > Set remote_node_defrag_ratio to zero by default; the new behavior is:
> >   1) try refilling the per-CPU partial list from the local node
> >   2) try allocating new slabs from the local node without reclamation
> >   3) try refilling the per-CPU partial list from remote nodes
> >   4) try allocating new slabs from the local node or remote nodes
> >
> > If the user specified remote_node_defrag_ratio, it probabilistically
> > tries 3) first and then tries 2) and 4) in order, to avoid an
> > unexpected behavioral change from the user's perspective.
>
> It makes sense to me, but as you note it would be great to demonstrate
> the benefits, because it adds complexity, especially in the already
> complex ___slab_alloc(). Networking has indeed historically been a
> workload very sensitive to slab performance, so it seems a good
> candidate.

Thank you for looking at it! Yeah, it was a PoC for something I thought
"oh, it might be useful", and I will definitely try to measure it.

> We could also postpone this until we have tried the percpu arrays
> improvements discussed at LSF/MM.

Possibly, but could you please share your plans/opinions on it?
I think one possible way is simply to allow the cpu freelist to be mixed
with objects from different slabs if we want to minimize changes, or to
introduce a per-cpu array similar to what SLAB does now.

And one thing I'm having difficulty understanding is: what is the
rationale behind (or impact of) managing objects on a per-slab basis,
other than avoiding array queues as SLUB did in 2007?
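
To make the proposed refill order concrete, below is a minimal user-space
sketch of the fallback sequence described in the cover letter. It assumes
step 1) always runs first and that a user-specified ratio only decides
whether 3) is probabilistically attempted ahead of 2); the helper names
(try_local_partial() and friends) are hypothetical stand-ins, not real
mm/slub.c functions. The in-kernel gate in get_any_partial() compares
get_cycles() % 1024 against the ratio; rand() stands in for that here.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stubs for the real SLUB refill paths. */
static bool try_local_partial(void)     { return false; }
static bool try_new_local_slab(void)    { return true;  }
static bool try_remote_partial(void)    { return false; }
static bool try_new_slab_any_node(void) { return true;  }

static bool defrag_remote_first(unsigned int ratio)
{
	/* ratio == 0 (the proposed default) never prefers remote nodes. */
	return ratio && (unsigned int)(rand() % 1024) <= ratio;
}

static bool refill_cpu_slab(unsigned int remote_node_defrag_ratio)
{
	if (try_local_partial())                 /* step 1) */
		return true;

	/* User-specified ratio: probabilistically pull 3) ahead of 2). */
	if (defrag_remote_first(remote_node_defrag_ratio) &&
	    try_remote_partial())
		return true;

	if (try_new_local_slab())                /* step 2), no reclaim   */
		return true;
	if (try_remote_partial())                /* step 3)               */
		return true;
	return try_new_slab_any_node();          /* step 4), may reclaim  */
}

int main(void)
{
	printf("refilled: %d\n", refill_cpu_slab(0));    /* new default */
	printf("refilled: %d\n", refill_cpu_slab(1000)); /* old default */
	return 0;
}

Note that when the probabilistic attempt at 3) fails, the sketch may retry
the remote partial lists again at step 3); a real implementation would
presumably skip the duplicate attempt.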