Re: [PATCH bpf-next 1/3] bpf: support nocsr patterns for calls to kfuncs

On 8/13/24 12:55 AM, Eduard Zingerman wrote:
> On Mon, 2024-08-12 at 22:36 -0700, Yonghong Song wrote:

> [...]

>>> @@ -16140,6 +16140,28 @@ static bool verifier_inlines_helper_call(struct bpf_verifier_env *env, s32 imm)
>>>  	}
>>>  }
>>>
>>> +/* Same as helper_nocsr_clobber_mask() but for kfuncs, see comment above */
>>> +static u32 kfunc_nocsr_clobber_mask(struct bpf_kfunc_call_arg_meta *meta)
>>> +{
>>> +	const struct btf_param *params;
>>> +	u32 vlen, i, mask;
>>
>> In helper_nocsr_clobber_mask() we have 'u8 mask'. To be consistent, can we
>> have 'u8 mask' here? Or are you worried that the number of arguments could
>> be more than 7? That does not seem to be the case right now.

> Before the nocsr part for helpers landed, there was a change request to
> make helper_nocsr_clobber_mask() return u32. I modified the return type
> but forgot to change the type of the 'mask' local variable.
>
> The main point of using u32 is uniformity.
> I can either change kfunc_nocsr_clobber_mask() to use u8 for 'mask',
> or update helper_nocsr_clobber_mask() to use u32 for 'mask'.

Changing to u32 in helper_nocsr_clobber_mask() is okay. I just want to
have a consistent type for 'mask' in both functions.
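
For reference, the u32 variant being agreed on could look roughly like the
sketch below. This is illustrative only: it assumes helper_nocsr_clobber_mask()
derives the mask from the helper's bpf_func_proto the same way the kfunc
variant does from BTF; the actual function in verifier.c may differ in detail.

static u32 helper_nocsr_clobber_mask(const struct bpf_func_proto *fn)
{
	u32 mask; /* previously u8; widened to match the u32 return type */
	int i;

	mask = 0;
	/* R0 is clobbered only if the helper returns a value */
	if (fn->ret_type != RET_VOID)
		mask |= BIT(BPF_REG_0);
	/* R1-R5 are clobbered only for declared arguments */
	for (i = 0; i < ARRAY_SIZE(fn->arg_type); ++i)
		if (fn->arg_type[i] != ARG_DONTCARE)
			mask |= BIT(BPF_REG_1 + i);
	return mask;
}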


>>> +
>>> +	params = btf_params(meta->func_proto);
>>> +	vlen = btf_type_vlen(meta->func_proto);
>>> +	mask = 0;
>>> +	if (!btf_type_is_void(btf_type_by_id(meta->btf, meta->func_proto->type)))
>>> +		mask |= BIT(BPF_REG_0);
>>> +	for (i = 0; i < vlen; ++i)
>>> +		mask |= BIT(BPF_REG_1 + i);
>>> +	return mask;
>>> +}
>>> +
>>> +/* Same as verifier_inlines_helper_call() but for kfuncs, see comment above */
>>> +static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
>>> +{
>>> +	return false;
>>> +}
>>> +
>>>  /* GCC and LLVM define a no_caller_saved_registers function attribute.
>>>   * This attribute means that function scratches only some of
>>>   * the caller saved registers defined by ABI.
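
(The pattern this attribute enables, per the in-file comment in verifier.c,
is a call bracketed by spills/fills of the live caller-saved registers,
sketched here for illustration; this snippet is not part of the patch:

	*(u64 *)(r10 - 8) = r1;	/* spill a live caller-saved register */
	call %[to_be_inlined]	/* call that clobbers only part of r0-r5 */
	r1 = *(u64 *)(r10 - 8);	/* fill it back after the call */

When the call is inlined and the callee does not actually scratch r1, the
verifier can remove the surrounding spill/fill pair.)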

>>> @@ -16238,6 +16260,20 @@ static void mark_nocsr_pattern_for_call(struct bpf_verifier_env *env,
>>>  				  bpf_jit_inlines_helper_call(call->imm));
>>>  	}
>>> +	if (bpf_pseudo_kfunc_call(call)) {
>>> +		struct bpf_kfunc_call_arg_meta meta;
>>> +		int err;
>>> +
>>> +		err = fetch_kfunc_meta(env, call, &meta, NULL);
>>> +		if (err < 0)
>>> +			/* error would be reported later */
>>> +			return;
>>> +
>>> +		clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
>>> +		can_be_inlined = (meta.kfunc_flags & KF_NOCSR) &&
>>> +				 verifier_inlines_kfunc_call(&meta);
>>
>> I think we do not need both 'meta.kfunc_flags & KF_NOCSR' and
>> verifier_inlines_kfunc_call(&meta). Only one of them is enough,
>> since they test very similar things. You do need to ensure that
>> kfuncs with KF_NOCSR are in special_kfunc_list, though.
>> WDYT?

> I can remove the flag in favour of verifier_inlines_kfunc_call().

Sounds good to me.
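
With KF_NOCSR dropped, the check would collapse to verifier_inlines_kfunc_call()
alone. A hypothetical sketch of what that could look like once later patches in
the series inline concrete kfuncs (the kfunc names here are assumptions, reusing
the existing special_kfunc_list mechanism in verifier.c):

static bool verifier_inlines_kfunc_call(struct bpf_kfunc_call_arg_meta *meta)
{
	/* hypothetical: true only for kfuncs the verifier itself inlines */
	return meta->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
	       meta->func_id == special_kfunc_list[KF_bpf_rdonly_cast];
}

and at the call site the flag test disappears:

	clobbered_regs_mask = kfunc_nocsr_clobber_mask(&meta);
	can_be_inlined = verifier_inlines_kfunc_call(&meta);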


>>> +	}
>>> +
>>>  	if (clobbered_regs_mask == ALL_CALLER_SAVED_REGS)
>>>  		return;




