Re: [PATCH bpf-next 1/2] bpf: Introduce bpf_can_loop() kfunc

On Wed, 2024-02-21 at 22:33 -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@xxxxxxxxxx>
> 
> While working on bpf_arena the following monster macro had to be
> used to iterate a link list:
>         for (struct bpf_iter_num ___it __attribute__((aligned(8),                       \
>                                                       cleanup(bpf_iter_num_destroy))),  \
>                         * ___tmp = (                                                    \
>                                 bpf_iter_num_new(&___it, 0, (1000000)),                 \
>                                 pos = list_entry_safe((head)->first,                    \
>                                                       typeof(*(pos)), member),          \
>                                 (void)bpf_iter_num_destroy, (void *)0);                 \
>              bpf_iter_num_next(&___it) && pos &&                                        \
>                 ({ ___tmp = (void *)pos->member.next; 1; });                            \
>              pos = list_entry_safe((void __arena *)___tmp, typeof(*(pos)), member))
> 
> It's similar to bpf_for(), bpf_repeat() macros.
> Unfortunately every "for" in normal C code needs an equivalent monster macro.

Tbh, I really don't like the idea of adding more hacks for loop handling.
This would be the fourth one: regular bounded loops, bpf_loop(), iterators,
and now bpf_can_loop(). Imo, it would be better to:
- either create a reusable "semi-monster" macro, e.g. "__with_bound(iter_name)",
  that hides the "struct bpf_iter_num iter_name ... cleanup ..." declaration
  and simplifies the definition of other iterating macros;
- or add kfuncs for a list iterator.
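To illustrate the first option, here is a rough userspace sketch; the
bpf_iter_num_* kfuncs are replaced by trivial stand-ins, and both the
__with_bound() name and the my_repeat() consumer are illustrative
assumptions, not actual patch material:

```c
#include <stddef.h>

/* Userspace stand-ins for the bpf_iter_num_* kfuncs (assumption:
 * simplified semantics, just enough to drive the macros below). */
struct bpf_iter_num { int cur, end; };

static void bpf_iter_num_new(struct bpf_iter_num *it, int start, int end)
{
	it->cur = start;
	it->end = end;
}

static int *bpf_iter_num_next(struct bpf_iter_num *it)
{
	return it->cur < it->end ? (it->cur++, &it->cur) : NULL;
}

static void bpf_iter_num_destroy(struct bpf_iter_num *it)
{
	(void)it;
}

/* One place owns the aligned(8)/cleanup() noise; every iterating
 * macro built on top only adds its own loop logic. */
#define __with_bound(it)                                                   \
	struct bpf_iter_num it __attribute__((aligned(8),                  \
					      cleanup(bpf_iter_num_destroy)))

/* Example consumer: a bounded repeat, much shorter than the
 * open-coded monster macro. */
#define my_repeat(N)                                                       \
	for (__with_bound(___it),                                          \
	     *___tmp = (bpf_iter_num_new(&___it, 0, (N)), (void *)0);      \
	     bpf_iter_num_next(&___it);                                    \
	     (void)___tmp)
```

The list-iterating macro from the patch description would then shrink to
its list-walking logic plus a __with_bound() in the for-init.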

Having said that, I don't see any obvious technical issues with this particular patch.
A few nits below.

[...]

> @@ -7954,10 +7956,14 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
>  	struct bpf_reg_state *cur_iter, *queued_iter;
>  	int iter_frameno = meta->iter.frameno;
>  	int iter_spi = meta->iter.spi;
> +	bool is_can_loop = is_can_loop_kfunc(meta);
>  
>  	BTF_TYPE_EMIT(struct bpf_iter);
>  
> -	cur_iter = &env->cur_state->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
> +	if (is_can_loop)
> +		cur_iter = &cur_st->can_loop_reg;
> +	else
> +		cur_iter = &cur_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;

I think adding a utility function that hides this choice, e.g.:

    get_iter_reg(struct bpf_verifier_state *st, int insn_idx)

would simplify the code a bit, here and in is_state_visited().
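Something along these lines, with simplified stand-in types (the real
verifier structs carry far more fields, and the exact parameter list is
an assumption about what callers already have in hand):

```c
#include <stdbool.h>

/* Simplified stand-ins for the verifier types (assumption: only the
 * fields this helper touches are mocked here). */
struct bpf_reg_state { int id; };
struct stack_slot { struct bpf_reg_state spilled_ptr; };
struct bpf_func_state { struct stack_slot stack[8]; };
struct bpf_verifier_state {
	struct bpf_func_state *frame[4];
	struct bpf_reg_state can_loop_reg;
};

/* Single place that decides where the iterator state lives: in the
 * per-state can_loop_reg for bpf_can_loop(), or in a stack slot of
 * some frame for ordinary iterators. */
static struct bpf_reg_state *
get_iter_reg(struct bpf_verifier_state *st, bool is_can_loop,
	     int iter_frameno, int iter_spi)
{
	if (is_can_loop)
		return &st->can_loop_reg;
	return &st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
}
```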

[...]

> @@ -19549,6 +19608,8 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
>  			delta	 += cnt - 1;
>  			env->prog = prog = new_prog;
>  			insn	  = new_prog->insnsi + i + delta;
> +			if (stack_extra)
> +				stack_depth_extra = max(stack_depth_extra, stack_extra);

I don't think using "max" here makes much sense.
If several sorts of kfuncs need extra stack space,
that space most likely won't be shared, e.g. bpf_can_loop() would need
its 8 bytes + bpf_some_new_feature() would need 16 bytes on top of that, etc.
So some kind of flags indicating which features require space is needed here.
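The flag-based accounting could look roughly like this; the flag names
and per-feature sizes are illustrative assumptions:

```c
/* Each kfunc sort that needs private scratch space sets its own bit;
 * sizes are illustrative. */
enum stack_extra_flags {
	STACK_EXTRA_CAN_LOOP	= 1 << 0,	/* 8-byte loop counter */
	STACK_EXTRA_NEW_FEATURE	= 1 << 1,	/* hypothetical, 16 bytes */
};

/* Extra stack is the sum over distinct features, not the max:
 * the slots are private and cannot be shared. */
static int stack_extra_bytes(unsigned int flags)
{
	int extra = 0;

	if (flags & STACK_EXTRA_CAN_LOOP)
		extra += 8;
	if (flags & STACK_EXTRA_NEW_FEATURE)
		extra += 16;
	return extra;
}
```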

[...]
