On Wed, Feb 28, 2024 at 6:16 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
>
> On Wed, Feb 28, 2024 at 2:04 PM Andrii Nakryiko
> <andrii.nakryiko@xxxxxxxxx> wrote:
> >
> > On Tue, Feb 27, 2024 at 6:25 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > >
> > > On Wed, Feb 28, 2024 at 9:24 AM Andrii Nakryiko
> > > <andrii.nakryiko@xxxxxxxxx> wrote:
> > > >
> > > > On Sun, Feb 25, 2024 at 2:07 AM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > > > >
> > > > > Add three new kfuncs for the bits iterator:
> > > > > - bpf_iter_bits_new
> > > > >   Initialize a new bits iterator for a given memory area. Due to the
> > > > >   limitation of bpf memalloc, the max number of bits that can be iterated
> > > > >   over is limited to (4096 * 8).
> > > > > - bpf_iter_bits_next
> > > > >   Get the next bit in a bpf_iter_bits
> > > > > - bpf_iter_bits_destroy
> > > > >   Destroy a bpf_iter_bits
> > > > >
> > > > > The bits iterator facilitates iteration over the bits of a memory area,
> > > > > such as a cpumask. It can be used in any context and on any address.
> > > > >
> > > > > Signed-off-by: Yafang Shao <laoar.shao@xxxxxxxxx>
> > > > > ---
> > > > >  kernel/bpf/helpers.c | 100 +++++++++++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 100 insertions(+)
> > > > >
> > > > > diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
> > > > > index 93edf730d288..052f63891834 100644
> > > > > --- a/kernel/bpf/helpers.c
> > > > > +++ b/kernel/bpf/helpers.c
> > > > > @@ -2542,6 +2542,103 @@ __bpf_kfunc void bpf_throw(u64 cookie)
> > > > >         WARN(1, "A call to BPF exception callback should never return\n");
> > > > >  }
> > > > >
> > > > > +struct bpf_iter_bits {
> > > > > +       __u64 __opaque[2];
> > > > > +} __aligned(8);
> > > > > +
> > > > > +struct bpf_iter_bits_kern {
> > > > > +       unsigned long *bits;
> > > > > +       u32 nr_bits;
> > > > > +       int bit;
> > > > > +} __aligned(8);
> > > > > +
> > > > > +/**
> > > > > + * bpf_iter_bits_new() - Initialize a new bits iterator for a given memory area
> > > > > + * @it: The new bpf_iter_bits to be created
> > > > > + * @unsafe_ptr__ign: A pointer to a memory area to be iterated over
> > > > > + * @nr_bits: The number of bits to be iterated over. Due to the limitation of
> > > > > + *           memalloc, it can't be greater than (4096 * 8).
> > > > > + *
> > > > > + * This function initializes a new bpf_iter_bits structure for iterating over
> > > > > + * a memory area which is specified by the @unsafe_ptr__ign and @nr_bits. It
> > > > > + * copies the data of the memory area into the newly created bpf_iter_bits @it
> > > > > + * for subsequent iteration operations.
> > > > > + *
> > > > > + * On success, 0 is returned. On failure, ERR is returned.
> > > > > + */
> > > > > +__bpf_kfunc int
> > > > > +bpf_iter_bits_new(struct bpf_iter_bits *it, const void *unsafe_ptr__ign, u32 nr_bits)
> > > > > +{
> > > > > +       struct bpf_iter_bits_kern *kit = (void *)it;
> > > > > +       u32 size = BITS_TO_BYTES(nr_bits);
> > > > > +       int err;
> > > > > +
> > > > > +       BUILD_BUG_ON(sizeof(struct bpf_iter_bits_kern) != sizeof(struct bpf_iter_bits));
> > > > > +       BUILD_BUG_ON(__alignof__(struct bpf_iter_bits_kern) !=
> > > > > +                    __alignof__(struct bpf_iter_bits));
> > > > > +
> > > > > +       if (!unsafe_ptr__ign || !nr_bits) {
> > > > > +               kit->bits = NULL;
> > > > > +               return -EINVAL;
> > > > > +       }
> > > > > +
> > > > > +       kit->bits = bpf_mem_alloc(&bpf_global_ma, size);
> > > > > +       if (!kit->bits)
> > > > > +               return -ENOMEM;
> > > >
> > > > it's probably going to be a pretty common case to do bits iteration
> > > > for nr_bits<=64, right?
> > >
> > > It's highly unlikely.
> > > Consider the CPU count as an example; There are 256 CPUs on our AMD
> > > EPYC servers.
> >
> > Also consider u64-based bit masks (like struct backtrack_state in
> > verifier code, which has u32 reg_mask and u64 stack_mask). This
> > iterator is a generic bits iterator, there are tons of cases of
> > u64/u32 masks in practice.
>
> Should we optimize it as follows?
>
>     if (nr_bits <= 64) {
>         // do the optimization
>     } else {
>         // fallback to memalloc
>     }
>

Yep, that's what I'm proposing

> >
> > >
> > > > So as an optimization, instead of doing
> > > > bpf_mem_alloc() for this case, you can just copy up to 8 bytes and
> > > > store it in a union of `unsigned long *bits` and `unsigned long
> > > > bits_copy`. As a performance optimization (and to reduce dependency on
> > > > memory allocation). WDYT?
> > > >
> > > --
> > > Regards
> > > Yafang
>
> --
> Regards
> Yafang
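
For illustration, a minimal sketch of the nr_bits <= 64 fast path discussed
above, assuming the union of `unsigned long *bits` and `unsigned long
bits_copy` suggested by Andrii. The copy_from_kernel_nofault() calls and the
nr_bits/bit initialization are stand-ins for whatever the actual patch does,
not taken from it:

struct bpf_iter_bits_kern {
        union {
                unsigned long *bits;
                unsigned long bits_copy;
        };
        u32 nr_bits;
        int bit;
} __aligned(8);

__bpf_kfunc int
bpf_iter_bits_new(struct bpf_iter_bits *it, const void *unsafe_ptr__ign, u32 nr_bits)
{
        struct bpf_iter_bits_kern *kit = (void *)it;
        u32 nr_bytes = BITS_TO_BYTES(nr_bits);
        int err;

        /* Leave the iterator empty until initialization fully succeeds. */
        kit->nr_bits = 0;
        kit->bits = NULL;
        kit->bit = -1;

        if (!unsafe_ptr__ign || !nr_bits)
                return -EINVAL;

        if (nr_bits <= 64) {
                /* Small mask: copy up to 8 bytes inline, no bpf_mem_alloc(). */
                err = copy_from_kernel_nofault(&kit->bits_copy, unsafe_ptr__ign, nr_bytes);
                if (err)
                        return err;
        } else {
                /* Large mask: keep the existing bpf_mem_alloc() path. */
                kit->bits = bpf_mem_alloc(&bpf_global_ma, nr_bytes);
                if (!kit->bits)
                        return -ENOMEM;
                err = copy_from_kernel_nofault(kit->bits, unsafe_ptr__ign, nr_bytes);
                if (err) {
                        bpf_mem_free(&bpf_global_ma, kit->bits);
                        kit->bits = NULL;
                        return err;
                }
        }

        kit->nr_bits = nr_bits;
        return 0;
}

bpf_iter_bits_next() and bpf_iter_bits_destroy() would then need the matching
nr_bits <= 64 check to pick between &kit->bits_copy and kit->bits, and to skip
bpf_mem_free() for the inline case.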
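
Separately, to make the API shape concrete, a hypothetical BPF-program-side
consumer of the three kfuncs named in the commit message. The __ksym
prototypes, and in particular the assumption that bpf_iter_bits_next() returns
a pointer to the current bit index (NULL once exhausted) like other open-coded
iterators, are based on convention rather than taken from the patch:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

extern int bpf_iter_bits_new(struct bpf_iter_bits *it,
                             const void *unsafe_ptr__ign, u32 nr_bits) __ksym;
extern int *bpf_iter_bits_next(struct bpf_iter_bits *it) __ksym;
extern void bpf_iter_bits_destroy(struct bpf_iter_bits *it) __ksym;

/* Count the set bits of an arbitrary mask, e.g. a struct cpumask. */
static u32 count_set_bits(const void *mask, u32 nr_bits)
{
        struct bpf_iter_bits it;
        u32 cnt = 0;
        int *bit;

        /* On failure _new() leaves the iterator empty (kit->bits == NULL in
         * the patch), so _next() should return NULL right away; _destroy()
         * below still runs, which the verifier requires in any case.
         */
        bpf_iter_bits_new(&it, mask, nr_bits);
        while ((bit = bpf_iter_bits_next(&it)))
                cnt++;
        bpf_iter_bits_destroy(&it);

        return cnt;
}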