On Tue, Aug 1, 2023 at 7:34 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > >
> > In kernel, we have a global variable
> > nr_cpu_ids (also in kernel/bpf/helpers.c)
> > which is used in numerous places for per cpu data struct access.
> >
> > I am wondering whether we could have bpf code like
> >   int nr_cpu_ids __ksym;
> >
> >   struct bpf_iter_num it;
> >   int i = 0;
> >
> >   // nr_cpu_ids is special, we can give it a range [1, CONFIG_NR_CPUS].
> >   bpf_iter_num_new(&it, 1, nr_cpu_ids);
> >   while ((v = bpf_iter_num_next(&it))) {
> >      /* access cpu i data */
> >      i++;
> >   }
> >   bpf_iter_num_destroy(&it);
> >
> > From all existing open coded iterator loops, looks like
> > upper bound has to be a constant. We might need to extend support
> > to bounded scalar upper bound if not there.
>
> Currently the upper bound is required by both the open-coded for-loop
> and the bpf_loop. I think we can extend it.
>
> It can't handle the cpumask case either.
>
>     for_each_cpu(cpu, mask)
>
> In the 'mask', the CPU IDs might not be continuous. In our container
> environment, we always use the cpuset cgroup for some critical tasks,
> but it is not so convenient to traverse the percpu data of this cpuset
> cgroup. We have to do it as follows for this case :
>
> That's why we prefer to introduce a bpf_for_each_cpu helper. It is
> fine if it can be implemented as a kfunc.

I think open-coded iterators are the only acceptable path forward here.
Since the existing bpf_iter_num doesn't fit due to sparse cpumasks,
let's introduce bpf_iter_cpumask and a few additional kfuncs that
return cpu_possible_mask and others.
We already have some cpumask support in kernel/bpf/cpumask.c;
bpf_iter_cpumask will be a natural follow-up.
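
Something along these lines, as a rough, untested sketch from the BPF
program side. The bpf_iter_cpumask_* kfuncs, bpf_get_cpu_possible_mask(),
the iterator struct layout, and some_percpu_counter are all assumptions
modeled on the existing bpf_iter_num pattern; only bpf_per_cpu_ptr() and
bpf_printk() exist today:

  #include <vmlinux.h>
  #include <bpf/bpf_helpers.h>

  char _license[] SEC("license") = "GPL";

  /* Hypothetical opaque iterator state; the size is a placeholder. */
  struct bpf_iter_cpumask {
          __u64 __opaque[2];
  } __attribute__((aligned(8)));

  /* Hypothetical kfuncs, mirroring bpf_iter_num_{new,next,destroy}(). */
  extern int bpf_iter_cpumask_new(struct bpf_iter_cpumask *it,
                                  const struct cpumask *mask) __ksym __weak;
  extern int *bpf_iter_cpumask_next(struct bpf_iter_cpumask *it) __ksym __weak;
  extern void bpf_iter_cpumask_destroy(struct bpf_iter_cpumask *it) __ksym __weak;
  extern const struct cpumask *bpf_get_cpu_possible_mask(void) __ksym __weak;

  /* Example per-cpu kernel variable to walk (assumed to exist). */
  extern u64 some_percpu_counter __ksym __weak;

  SEC("syscall")
  int sum_per_cpu(void *ctx)
  {
          const struct cpumask *mask = bpf_get_cpu_possible_mask();
          struct bpf_iter_cpumask it;
          u64 sum = 0;
          int *cpu;

          if (!mask)
                  return 0;

          bpf_iter_cpumask_new(&it, mask);
          while ((cpu = bpf_iter_cpumask_next(&it))) {
                  /* per-cpu access itself works today via bpf_per_cpu_ptr() */
                  u64 *v = bpf_per_cpu_ptr(&some_percpu_counter, *cpu);

                  if (v)
                          sum += *v;
          }
          bpf_iter_cpumask_destroy(&it);

          bpf_printk("sum across cpus in mask: %llu", sum);
          return 0;
  }

The same loop would work for a cpuset cgroup's effective mask once a
kfunc exposes it, which should cover the sparse-mask case above without
a dedicated bpf_for_each_cpu helper.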