Re: [PATCH bpf-next] bpf: tcp: Improve bpf write tcp opt performance

On Thu, May 16, 2024 at 11:15 AM +08, Feng Zhou wrote:
> On 2024/5/15 17:48, Jakub Sitnicki wrote:
>> On Wed, May 15, 2024 at 04:19 PM +08, Feng zhou wrote:
>>> From: Feng Zhou <zhoufeng.zf@xxxxxxxxxxxxx>
>>>
>>> With every packet writing a tcp option, testing showed a 20%
>>> performance loss. For a packet to carry a tcp option, the bpf prog
>>> is triggered three times: "tcp_send_mss" is called to calculate
>>> mss_cache, "tcp_established_options" is called to reserve the tcp
>>> opt len, and "bpf_skops_write_hdr_opt" is called to write the tcp
>>> opt; but "tcp_send_mss" runs before TSO. Tracing with bpftrace
>>> showed that during the stress test "tcp_send_mss" was called 900k
>>> times per second. Since the opt len does not change often, caching
>>> it looks like a worthwhile optimization.
>> You could also make your BPF sock_ops program cache the value and return
>> the cached value when called for BPF_SOCK_OPS_HDR_OPT_LEN_CB.
>> If that is, in your opinion, prohibitively expensive, it would be good
>> to see a sample program and CPU cycle measurements (bpftool prog profile).
>> 
>
> I'm not referring to the overhead of the bpf prog itself. I have
> tested a bpf prog that does nothing and returns immediately, and the
> loss is still 20%. During the stress test, the per-second call
> frequencies of "tcp_send_mss" and "__tcp_transmit_skb" were:
>
> @[
>     bpf_skops_hdr_opt_len.isra.46+1
>     tcp_established_options+730
>     tcp_current_mss+81
>     tcp_send_mss+23
>     tcp_sendmsg_locked+285
>     tcp_sendmsg+58
>     sock_sendmsg+48
>     sock_write_iter+151
>     new_sync_write+296
>     vfs_write+165
>     ksys_write+89
>     do_syscall_64+89
>     entry_SYSCALL_64_after_hwframe+68
> ]: 3671671
>
> @[
>     bpf_skops_write_hdr_opt.isra.47+1
>     __tcp_transmit_skb+761
>     tcp_write_xmit+822
>     __tcp_push_pending_frames+52
>     tcp_close+813
>     inet_release+60
>     __sock_release+55
>     sock_close+17
>     __fput+179
>     task_work_run+112
>     exit_to_usermode_loop+245
>     do_syscall_64+456
>     entry_SYSCALL_64_after_hwframe+68
> ]: 36125
>
> "tcp_send_mss" before TSO, without packet aggregation, and
> "__tcp_transmit_skb" after TSO, the gap between the two is
> 100 times.

All right, we are getting somewhere.

So in your workload bpf_skops_hdr_opt_len gets called more times than
you would like. And you have determined that by memoizing the BPF
skops/BPF_SOCK_OPS_HDR_OPT_LEN_CB result and skipping over part of
tcp_established_options you get a performance boost.
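
Just so we're talking about the same thing - memoizing purely on the
BPF side, which I suggested earlier, could look roughly like this (a
minimal, untested sketch; the sk_storage map and MY_OPT_LEN are
made-up placeholders):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define MY_OPT_LEN 4			/* placeholder option size */

struct {
	__uint(type, BPF_MAP_TYPE_SK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u8);		/* cached opt len per socket */
} opt_len_cache SEC(".maps");

SEC("sockops")
int cache_opt_len(struct bpf_sock_ops *ctx)
{
	__u8 *len;

	if (ctx->op != BPF_SOCK_OPS_HDR_OPT_LEN_CB || !ctx->sk)
		return 1;

	len = bpf_sk_storage_get(&opt_len_cache, ctx->sk, NULL,
				 BPF_SK_STORAGE_GET_F_CREATE);
	if (!len)
		return 1;
	if (!*len)
		*len = MY_OPT_LEN;	/* compute once, reuse afterwards */

	bpf_reserve_hdr_opt(ctx, *len, 0);
	return 1;
}

That makes the prog itself cheap, but, as you point out, it does not
remove the cost of making the call in the first place.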

Did you first check with perf record to which ops in
tcp_established_options are taking up so many cycles?
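
(Something like perf record -g on the sender, plus "bpftool prog
profile id <PROG_ID> duration 10 cycles instructions" for the BPF
side, should cover both.)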

If it's not the BPF prog, which you have ruled out, then where are we
burning cycles? Maybe that is something that can be improved.

Also, in terms of quantifying the improvement - it is 20% in terms of
what? Throughput, pps, cycles? And was that a single data point? For
multiple measurements there must be some variance (+/- X pp).

Would be great to see some data to back it up.

[...]

>>> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
>>> index 90706a47f6ff..f2092de1f432 100644
>>> --- a/tools/include/uapi/linux/bpf.h
>>> +++ b/tools/include/uapi/linux/bpf.h
>>> @@ -6892,8 +6892,14 @@ enum {
>>>   	 * options first before the BPF program does.
>>>   	 */
>>>   	BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG = (1<<6),
>>> +	/* Fast path to reserve space in a skb under
>>> +	 * sock_ops->op == BPF_SOCK_OPS_HDR_OPT_LEN_CB.
>>> +	 * The opt length doesn't change often, so it can be cached in
>>> +	 * tcp_sock. Set BPF_SOCK_OPS_HDR_OPT_LEN_CACHE_CB_FLAG to skip the bpf call.
>>> +	 */
>>> +	BPF_SOCK_OPS_HDR_OPT_LEN_CACHE_CB_FLAG = (1<<7),
>> Have you considered a bpf_reserve_hdr_opt() flag instead?
>> An example or test coverage showing this API extension in action
>> would help.
>> 
>
> A bpf_reserve_hdr_opt() flag can't accomplish this. I want to avoid
> triggering the bpf prog so frequently before TSO: provide a way for
> users to skip the bpf prog entirely while the opt len is unchanged.
> Then, when writing the opt, if the len changes, clear the flag and
> update the opt len on the next packet.

I haven't seen a sample using the API extension that you're proposing,
so I can only guess. But you probably have something like:

SEC("sockops")
int sockops_prog(struct bpf_sock_ops *ctx)
{
	if (ctx->op == BPF_SOCK_OPS_HDR_OPT_LEN_CB &&
	    ctx->args[0] == BPF_WRITE_HDR_TCP_CURRENT_MSS) {
		bpf_reserve_hdr_opt(ctx, N, 0);
		bpf_sock_ops_cb_flags_set(ctx,
					  ctx->bpf_sock_ops_cb_flags |
					  MY_NEW_FLAG);
		return 1;
	}

	return 1;
}

I don't understand why you're saying it can't be transformed into:

SEC("sockops")
int sockops_prog(struct bpf_sock_ops *ctx)
{
	if (ctx->op == BPF_SOCK_OPS_HDR_OPT_LEN_CB &&
	    ctx->args[0] == BPF_WRITE_HDR_TCP_CURRENT_MSS) {
		bpf_reserve_hdr_opt(ctx, N, MY_NEW_FLAG);
		return 1;
	}

	return 1;
}
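
And, going by your description of the flow, the invalidation could
live in the same program: when the written option length no longer
matches what was cached, clear the flag so the next packet goes
through HDR_OPT_LEN_CB again. Roughly (same placeholders as above;
current_opt_len() is a hypothetical stand-in for however you compute
the length):

	if (ctx->op == BPF_SOCK_OPS_WRITE_HDR_OPT_CB) {
		__u32 len = current_opt_len(ctx);	/* hypothetical */

		if (len != N)	/* cached length went stale */
			bpf_sock_ops_cb_flags_set(ctx,
						  ctx->bpf_sock_ops_cb_flags &
						  ~MY_NEW_FLAG);
		/* then write the option with bpf_store_hdr_opt() */
	}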

[...]




