Re: [RFC bpf-next 1/4] bpf: cgroup_sock lsm flavor

On Tue, Feb 15, 2022 at 04:12:38PM -0800, Stanislav Fomichev wrote:
>  {
> @@ -1767,14 +1769,23 @@ static int invoke_bpf_prog(const struct btf_func_model *m, u8 **pprog,
>  
>  	/* arg1: lea rdi, [rbp - stack_size] */
>  	EMIT4(0x48, 0x8D, 0x7D, -stack_size);
> -	/* arg2: progs[i]->insnsi for interpreter */
> -	if (!p->jited)
> -		emit_mov_imm64(&prog, BPF_REG_2,
> -			       (long) p->insnsi >> 32,
> -			       (u32) (long) p->insnsi);
> -	/* call JITed bpf program or interpreter */
> -	if (emit_call(&prog, p->bpf_func, prog))
> -		return -EINVAL;
> +
> +	if (p->expected_attach_type == BPF_LSM_CGROUP_SOCK) {
> +		/* arg2: progs[i] */
> +		emit_mov_imm64(&prog, BPF_REG_2, (long) p >> 32, (u32) (long) p);
> +		if (emit_call(&prog, __cgroup_bpf_run_lsm_sock, prog))
> +			return -EINVAL;
> +	} else {
> +		/* arg2: progs[i]->insnsi for interpreter */
> +		if (!p->jited)
> +			emit_mov_imm64(&prog, BPF_REG_2,
> +				       (long) p->insnsi >> 32,
> +				       (u32) (long) p->insnsi);
> +
> +		/* call JITed bpf program or interpreter */
> +		if (emit_call(&prog, p->bpf_func, prog))
> +			return -EINVAL;

Overall I think it's a workable solution.
As far as the mechanism goes, I think it would be better
to allocate a single dummy bpf_prog and use the normal fmod_ret
registration mechanism instead of hacking the arch trampoline bits:
set dummy_bpf_prog->bpf_func = __cgroup_bpf_run_lsm_sock;
and keep dummy_bpf_prog->jited = false;
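
Roughly something like the below (just a sketch, untested; the helper
and variable names are illustrative, and it assumes the in-tree
bpf_prog_alloc()/bpf_prog_size() helpers and that
__cgroup_bpf_run_lsm_sock()'s signature is made compatible with
->bpf_func):

	/* one shared dummy prog for all BPF_LSM_CGROUP_SOCK attachments */
	static struct bpf_prog *cgroup_lsm_dummy_prog;

	static int cgroup_lsm_alloc_dummy_prog(void)
	{
		struct bpf_prog *p;

		p = bpf_prog_alloc(bpf_prog_size(0), GFP_USER);
		if (!p)
			return -ENOMEM;

		/* the trampoline emit_call()s bpf_func via the normal fmod_ret path */
		p->bpf_func = __cgroup_bpf_run_lsm_sock;
		/*
		 * keep jited == false so the arch code still passes p->insnsi
		 * as arg2; __cgroup_bpf_run_lsm_sock() can then get back to the
		 * prog from there, and invoke_bpf_prog() needs no special case
		 */
		p->jited = false;

		cgroup_lsm_dummy_prog = p;
		return 0;
	}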
