On 7/12/22 15:39, Hyeonggon Yoo wrote:
> This is v3 of common kmalloc series.
>
> This series generalize most of kmalloc functions and move its
> implementation into mm/slab_common.c.
>
> I believe this series give better maintainability of code for SLAB and SLUB.
> Please consider applying.
>
> This series is based on slab/for-next and also available at
> https://github.com/hygoni/linux/tree/slab-common-v3r0
>
> Any feedbacks/reviews will be appreciated.

Hi, thanks for all your efforts. It's shaping up nicely, so I think the next
version will be ready to be added to -next after the 5.20 merge window.

As I've finished the individual reviews, I'm looking at the result and see a
bit more potential for cleanups, which could perhaps be incorporated into the
existing patches, or done additionally:

- kmalloc_large_node_notrace() has only one caller, so it can be removed and
  the caller can call __kmalloc_large_node_notrace() directly, especially if
  it's not __always_inline, as I've (IIRC) suggested.

- kmem_cache_alloc_trace() and kmem_cache_alloc_node_trace() are weird ones;
  they are in fact for kmalloc despite the name. They depend on
  CONFIG_TRACING, yet if you look carefully, the !CONFIG_TRACING variant also
  goes through a trace_* function. The actual difference seems to be that
  slab_alloc(), and thus kasan_alloc() and kfence_alloc(), don't get the
  orig_size that way, which is dubious. It might be worth trying to unify
  this as well, e.g.:

  - use only the CONFIG_TRACING variant, discard the other
  - declare it in mm/slab.h, as this is not for general usage
  - provide a single implementation in mm/slab_common.c that calls
    __kmem_cache_alloc_node() from SLAB/SLUB and does the trace
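
To make the last point concrete, a rough sketch of what I have in mind
(just an illustration, not a patch: the exact __kmem_cache_alloc_node()
signature would need to be agreed between SLAB and SLUB, and the
trace_kmalloc_node() arguments should follow whatever the series settles on):

```c
/* mm/slab.h -- declared here, not in include/linux/slab.h, as it is
 * internal to the allocators and not for general usage. */
void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
				  int node, size_t size);

/* mm/slab_common.c -- single shared implementation; SLAB and SLUB each
 * provide their own __kmem_cache_alloc_node(). */
void *kmem_cache_alloc_node_trace(struct kmem_cache *s, gfp_t gfpflags,
				  int node, size_t size)
{
	/*
	 * Passing the original request size down means slab_alloc(), and
	 * through it kasan/kfence, see orig_size in both the TRACING and
	 * !TRACING configurations, removing the current asymmetry.
	 */
	void *ret = __kmem_cache_alloc_node(s, gfpflags, node, size);

	trace_kmalloc_node(_RET_IP_, ret, size, s->size, gfpflags, node);
	return ret;
}
```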