On 10/4/24 11:25 PM, Roman Gushchin wrote:
> On Fri, Oct 04, 2024 at 01:10:58PM -0700, Song Liu wrote:
>> On Wed, Oct 2, 2024 at 11:10 AM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
>>>
>>> The bpf_get_kmem_cache() is to get a slab cache information from a
>>> virtual address like virt_to_cache(). If the address is a pointer
>>> to a slab object, it'd return a valid kmem_cache pointer, otherwise
>>> NULL is returned.
>>>
>>> It doesn't grab a reference count of the kmem_cache so the caller is
>>> responsible to manage the access. The intended use case for now is to
>>> symbolize locks in slab objects from the lock contention tracepoints.
>>>
>>> Suggested-by: Vlastimil Babka <vbabka@xxxxxxx>
>>> Acked-by: Roman Gushchin <roman.gushchin@xxxxxxxxx> (mm/*)
>>> Acked-by: Vlastimil Babka <vbabka@xxxxxxx> #mm/slab
>>> Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>

So IIRC from our discussions with Namhyung and Arnaldo at LSF/MM, the
perf use case was:

- at the beginning it iterates the kmem caches and stores anything of
  possible interest in bpf maps or somewhere - hence we have the
  iterator

- during profiling, it gets from the object to its cache, but doesn't
  need to access the cache - it just stores the kmem_cache address in
  the perf record

- after profiling, it uses the information in the maps from the first
  step together with the cache pointers from the second step to
  calculate whatever is necessary

So at no point should it be necessary to take a refcount on a
kmem_cache?

But maybe bpf_get_kmem_cache() as implemented here is too generic for
the above use case, and it should instead be implemented in a way that
the pointer it returns cannot be used to access anything (which could
be unsafe), but only as a bpf map key - so it should return e.g. an
unsigned long instead?

>>> ---
>>>  kernel/bpf/helpers.c |  1 +
>>>  mm/slab_common.c     | 19 +++++++++++++++++++
>>>  2 files changed, 20 insertions(+)
>>>
>>> diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
>>> index 4053f279ed4cc7ab..3709fb14288105c6 100644
>>> --- a/kernel/bpf/helpers.c
>>> +++ b/kernel/bpf/helpers.c
>>> @@ -3090,6 +3090,7 @@ BTF_ID_FLAGS(func, bpf_iter_bits_new, KF_ITER_NEW)
>>>  BTF_ID_FLAGS(func, bpf_iter_bits_next, KF_ITER_NEXT | KF_RET_NULL)
>>>  BTF_ID_FLAGS(func, bpf_iter_bits_destroy, KF_ITER_DESTROY)
>>>  BTF_ID_FLAGS(func, bpf_copy_from_user_str, KF_SLEEPABLE)
>>> +BTF_ID_FLAGS(func, bpf_get_kmem_cache, KF_RET_NULL)
>>>  BTF_KFUNCS_END(common_btf_ids)
>>>
>>>  static const struct btf_kfunc_id_set common_kfunc_set = {
>>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>>> index 7443244656150325..5484e1cd812f698e 100644
>>> --- a/mm/slab_common.c
>>> +++ b/mm/slab_common.c
>>> @@ -1322,6 +1322,25 @@ size_t ksize(const void *objp)
>>>  }
>>>  EXPORT_SYMBOL(ksize);
>>>
>>> +#ifdef CONFIG_BPF_SYSCALL
>>> +#include <linux/btf.h>
>>> +
>>> +__bpf_kfunc_start_defs();
>>> +
>>> +__bpf_kfunc struct kmem_cache *bpf_get_kmem_cache(u64 addr)
>>> +{
>>> +	struct slab *slab;
>>> +
>>> +	if (!virt_addr_valid(addr))
>>> +		return NULL;
>>> +
>>> +	slab = virt_to_slab((void *)(long)addr);
>>> +	return slab ? slab->slab_cache : NULL;
>>> +}
>>
>> Do we need to hold a refcount to the slab_cache? Given
>> we make this kfunc available everywhere, including
>> sleepable contexts, I think it is necessary.
>
> It's a really good question.
>
> If the callee somehow owns the slab object, as in the example
> provided in the series (current task), it's not necessary.
>
> If a user can pass a random address, you're right, we need to
> grab the slab_cache's refcnt. But then we also can't guarantee
> that the object still belongs to the same slab_cache; the
> function becomes racy by definition.
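
To illustrate the map-key-only idea above, here's a rough, untested
sketch of what I imagine the profiling-side BPF program would look
like. The slab_caches map layout, the slab_cache_info struct and
attaching to the contention_begin tracepoint are my assumptions, not
something taken from this series - only bpf_get_kmem_cache() itself is:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* the kfunc added by this patch */
extern struct kmem_cache *bpf_get_kmem_cache(u64 addr) __ksym;

/* hypothetical per-cache data saved by the iterator pass upfront */
struct slab_cache_info {
	char name[32];
	__u32 obj_size;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16384);
	__type(key, __u64);	/* kmem_cache address, from the iterator */
	__type(value, struct slab_cache_info);
} slab_caches SEC(".maps");

SEC("tp_btf/contention_begin")
int BPF_PROG(on_contention_begin, void *lock, unsigned int flags)
{
	struct slab_cache_info *info;
	struct kmem_cache *s;
	__u64 key;

	s = bpf_get_kmem_cache((__u64)lock);
	if (!s)
		return 0;

	/* the pointer is never dereferenced, only used as a key to
	 * find what the iterator stored before profiling started */
	key = (__u64)s;
	info = bpf_map_lookup_elem(&slab_caches, &key);
	if (info)
		bpf_printk("lock %llx in cache %s", (__u64)lock, info->name);

	return 0;
}

char LICENSE[] SEC("license") = "GPL";

The point being that the program never needs to dereference the
kmem_cache pointer, so an opaque unsigned long return value would
serve this use case equally well.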