On Fri, Feb 07, 2025 at 12:39:57PM -1000, Tejun Heo wrote:
> Hello,
>
> On Fri, Feb 07, 2025 at 09:40:53PM +0100, Andrea Righi wrote:
> > +/**
> > + * scx_bpf_cpu_to_node - Return the NUMA node the given @cpu belongs to
> > + */
> > +__bpf_kfunc int scx_bpf_cpu_to_node(s32 cpu)
>
> Maybe scx_bpf_cpu_node() to be in line with scx_bpf_task_cpu/cgroup()?

Ok, then maybe we can have scx_bpf_cpu_node() for the kfunc, which wraps
scx_cpu_node() for internal use.

> > +{
> > +#ifdef CONFIG_NUMA
> > +	if (cpu < 0 || cpu >= nr_cpu_ids)
> > +		return -EINVAL;
>
> Use ops_cpu_valid()? Otherwise, we can end up calling cpu_to_node() with an
> impossible CPU. Also, I don't think CPU -> node mapping function should be
> able to return an error value. It should just trigger ops error.

Ok.

> > +
> > +	return idle_cpu_to_node(cpu);
>
> This is contingent on scx_builtin_idle_per_node, right? It's confusing for
> CPU -> node mapping function to return NUMA_NO_NODE depending on an ops
> flag. Shouldn't this be a generic mapping function?

The idea is that BPF schedulers can use this kfunc to determine the right
idle cpumask to use. For example, a typical usage could be:

	int node = scx_bpf_cpu_node(prev_cpu);
	s32 cpu = scx_bpf_pick_idle_cpu_in_node(p->cpus_ptr, node,
						SCX_PICK_IDLE_IN_NODE);

Or:

	int node = scx_bpf_cpu_node(prev_cpu);
	const struct cpumask *idle_cpumask = scx_bpf_get_idle_cpumask_node(node);

When SCX_OPS_BUILTIN_IDLE_PER_NODE is disabled, we need to point to the
global idle cpumask, which is identified by NUMA_NO_NODE, so this is why we
can return NUMA_NO_NODE from scx_bpf_cpu_node().

Do you think we should make this more clear / document this better? Or do
you think we should use a different API?

> > index 50e1499ae0935..caa1a80f9a60c 100644
> > --- a/tools/sched_ext/include/scx/compat.bpf.h
> > +++ b/tools/sched_ext/include/scx/compat.bpf.h
> > @@ -130,6 +130,25 @@ bool scx_bpf_dispatch_vtime_from_dsq___compat(struct bpf_iter_scx_dsq *it__iter,
> >  			scx_bpf_now() :						\
> >  			bpf_ktime_get_ns())
> >
> > +#define __COMPAT_scx_bpf_cpu_to_node(cpu)					\
> > +	(bpf_ksym_exists(scx_bpf_cpu_to_node) ?					\
> > +	 scx_bpf_cpu_to_node(cpu) : 0)
> > +
> > +#define __COMPAT_scx_bpf_get_idle_cpumask_node(node)				\
> > +	(bpf_ksym_exists(scx_bpf_get_idle_cpumask_node) ?			\
> > +	 scx_bpf_get_idle_cpumask_node(node) :					\
> > +	 scx_bpf_get_idle_cpumask())						\
> > +
> > +#define __COMPAT_scx_bpf_get_idle_smtmask_node(node)				\
> > +	(bpf_ksym_exists(scx_bpf_get_idle_smtmask_node) ?			\
> > +	 scx_bpf_get_idle_smtmask_node(node) :					\
> > +	 scx_bpf_get_idle_smtmask())
> > +
> > +#define __COMPAT_scx_bpf_pick_idle_cpu_node(cpus_allowed, node, flags)	\
> > +	(bpf_ksym_exists(scx_bpf_pick_idle_cpu_node) ?				\
> > +	 scx_bpf_pick_idle_cpu_node(cpus_allowed, node, flags) :		\
> > +	 scx_bpf_pick_idle_cpu(cpus_allowed, flags))
>
> Can you please document when these compat macros can be dropped? Also,
> shouldn't it also provide a compat macro for the new ops flag using
> __COMPAT_ENUM_OR_ZERO()? Otherwise, trying to load new binary using the new
> flag on an older kernel will fail, right?

Right. Will add that.

Thanks,
-Andrea
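
For reference, a minimal sketch of how the reworked kfunc could look,
assuming ops_cpu_valid() keeps its current (cpu, where) signature in
kernel/sched/ext.c and that scx_cpu_node() is the internal helper mentioned
above (both are assumptions here, not code from the posted series):

	/*
	 * scx_bpf_cpu_node - Return the NUMA node the given @cpu belongs to,
	 * or NUMA_NO_NODE when per-node idle cpumasks are not enabled.
	 */
	__bpf_kfunc int scx_bpf_cpu_node(s32 cpu)
	{
		/*
		 * ops_cpu_valid() triggers an ops error for an impossible CPU,
		 * so no error value needs to be returned to the caller.
		 */
		if (!ops_cpu_valid(cpu, NULL))
			return NUMA_NO_NODE;

		return scx_cpu_node(cpu);
	}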
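
And for the new ops flag, a sketch following the pattern already used by
the other flag wrappers in tools/sched_ext/include/scx/compat.h (the exact
arguments expected by __COMPAT_ENUM_OR_ZERO() should be double-checked
against the current definition there, so treat this as illustrative only):

	/* Resolve to 0 when the running kernel does not know the flag yet. */
	#define SCX_OPS_BUILTIN_IDLE_PER_NODE					\
		__COMPAT_ENUM_OR_ZERO("scx_ops_flags", "SCX_OPS_BUILTIN_IDLE_PER_NODE")

That way newer schedulers can unconditionally OR the flag into ops->flags
and still load on older kernels.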