Re: [PATCH sched_ext/for-6.15 v3 3/5] sched_ext: Add scx_kfunc_ids_ops_context_sensitive for unified filtering of context-sensitive SCX kfuncs

Hello,

On Wed, Feb 26, 2025 at 07:28:18PM +0000, Juntong Deng wrote:
> +BTF_KFUNCS_START(scx_kfunc_ids_ops_context_sensitive)
> +/* scx_kfunc_ids_select_cpu */
> +BTF_ID_FLAGS(func, scx_bpf_select_cpu_dfl, KF_RCU)
> +/* scx_kfunc_ids_enqueue_dispatch */
> +BTF_ID_FLAGS(func, scx_bpf_dsq_insert, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_insert_vtime, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime, KF_RCU)
> +/* scx_kfunc_ids_dispatch */
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_nr_slots)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_cancel)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_to_local)
> +BTF_ID_FLAGS(func, scx_bpf_consume)
> +/* scx_kfunc_ids_cpu_release */
> +BTF_ID_FLAGS(func, scx_bpf_reenqueue_local)
> +/* scx_kfunc_ids_unlocked */
> +BTF_ID_FLAGS(func, scx_bpf_create_dsq, KF_SLEEPABLE)
> +/* Intersection of scx_kfunc_ids_dispatch and scx_kfunc_ids_unlocked */
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_slice)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_set_vtime)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dsq_move_vtime, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_slice)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_vtime)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq, KF_RCU)
> +BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime_from_dsq, KF_RCU)
> +BTF_KFUNCS_END(scx_kfunc_ids_ops_context_sensitive)

I'm not a big fan of repeating the kfuncs. This is going to be error-prone.
Can't we register the existing sets and test them in the filter function
instead? If that's too cumbersome, maybe we can put these in a macro so that
we don't have to repeat the functions?
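
E.g. something like the following (untested, and the macro names are just
illustrative) would keep each list in one place and expand it into both the
per-context set and the combined set:

  /* one entry per dispatch-context kfunc; flags are optional trailing args */
  #define SCX_KFUNCS_DISPATCH(X)		\
	X(scx_bpf_dispatch_nr_slots)		\
	X(scx_bpf_dispatch_cancel)		\
	X(scx_bpf_dsq_move_to_local)		\
	X(scx_bpf_consume)

  /* expands one entry into a BTF_ID_FLAGS() line */
  #define SCX_KFUNC_ENTRY(name, ...)	BTF_ID_FLAGS(func, name, ##__VA_ARGS__)

  BTF_KFUNCS_START(scx_kfunc_ids_dispatch)
  SCX_KFUNCS_DISPATCH(SCX_KFUNC_ENTRY)
  BTF_KFUNCS_END(scx_kfunc_ids_dispatch)

  BTF_KFUNCS_START(scx_kfunc_ids_ops_context_sensitive)
  SCX_KFUNCS_DISPATCH(SCX_KFUNC_ENTRY)
  /* ... the other per-context lists expanded the same way ... */
  BTF_KFUNCS_END(scx_kfunc_ids_ops_context_sensitive)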

> +static int scx_kfunc_ids_ops_context_sensitive_filter(const struct bpf_prog *prog, u32 kfunc_id)
> +{
> +	u32 moff, flags;
> +
> +	if (!btf_id_set8_contains(&scx_kfunc_ids_ops_context_sensitive, kfunc_id))
> +		return 0;
> +
> +	if (prog->type == BPF_PROG_TYPE_SYSCALL &&
> +	    btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id))
> +		return 0;

Not from this change, but these can probably be allowed from TRACING too.
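
i.e. extend the check to something like (untested):

	if ((prog->type == BPF_PROG_TYPE_SYSCALL ||
	     prog->type == BPF_PROG_TYPE_TRACING) &&
	    btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id))
		return 0;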

> +	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
> +	    prog->aux->st_ops != &bpf_sched_ext_ops)
> +		return 0;

Why can't other struct_ops progs call scx_kfunc_ids_unlocked kfuncs?
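
If the concern is locking context, it seems like they could at least get the
unlocked set, mirroring the SYSCALL branch above, e.g. (untested):

	if (prog->type == BPF_PROG_TYPE_STRUCT_OPS &&
	    prog->aux->st_ops != &bpf_sched_ext_ops &&
	    btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id))
		return 0;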

> +	/* prog->type == BPF_PROG_TYPE_STRUCT_OPS && prog->aux->st_ops == &bpf_sched_ext_ops */
> +
> +	moff = prog->aux->attach_st_ops_member_off;
> +	flags = scx_ops_context_flags[SCX_MOFF_IDX(moff)];
> +
> +	if ((flags & SCX_OPS_KF_UNLOCKED) &&
> +	    btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id))
> +		return 0;

Wouldn't this disallow e.g. ops.dispatch() from calling scx_bpf_dsq_move()?
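
scx_bpf_dsq_move() is in both scx_kfunc_ids_dispatch and
scx_kfunc_ids_unlocked (per the "Intersection" comment above), so it seems
like this needs to test each (flag, set) pair and allow the call if any of
them matches, e.g. (untested, and I'm guessing SCX_OPS_KF_DISPATCH is the
corresponding flag name):

	if (((flags & SCX_OPS_KF_UNLOCKED) &&
	     btf_id_set8_contains(&scx_kfunc_ids_unlocked, kfunc_id)) ||
	    ((flags & SCX_OPS_KF_DISPATCH) &&
	     btf_id_set8_contains(&scx_kfunc_ids_dispatch, kfunc_id)))
		return 0;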

Have you tested that the before and after behaviors match?

Thanks.

-- 
tejun



