On 2024/5/9 08:24, Martin KaFai Lau wrote:
On 5/6/24 4:29 PM, Alexei Starovoitov wrote:
On Mon, May 6, 2024 at 2:39 PM Martin KaFai Lau <martin.lau@xxxxxxxxx> wrote:
On 4/30/24 5:18 AM, Philo Lu wrote:
Make tp_btf programs able to use bpf_dynptr_from_skb(), which is useful for
skb parsing, especially for non-linear paged skb data. This is achieved by
adding the KF_TRUSTED_ARGS flag to bpf_dynptr_from_skb and registering it
for TRACING progs. With KF_TRUSTED_ARGS, args from fentry/fexit are
excluded, so unsafe progs like fexit/__kfree_skb are not allowed.

The skb dynptr also needs to be read-only in tp_btf. Because
may_access_direct_pkt_data() returns false by default when checking
bpf_dynptr_from_skb, there is no need to add BPF_PROG_TYPE_TRACING to it
explicitly.
Signed-off-by: Philo Lu <lulie@xxxxxxxxxxxxxxxxx>
---
net/core/filter.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/core/filter.c b/net/core/filter.c
index 786d792ac816..399492970b8c 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -11990,7 +11990,7 @@ int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
 }
BTF_KFUNCS_START(bpf_kfunc_check_set_skb)
-BTF_ID_FLAGS(func, bpf_dynptr_from_skb)
+BTF_ID_FLAGS(func, bpf_dynptr_from_skb, KF_TRUSTED_ARGS)
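[For context, with the kfunc registered for TRACING, a tp_btf program could
read paged skb data through the read-only dynptr roughly as below. This is
only a sketch: the tracepoint choice, the __ksym extern, and vmlinux.h/libbpf
conventions are illustrative and not part of this patch.]

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    extern int bpf_dynptr_from_skb(struct sk_buff *skb, u64 flags,
                                   struct bpf_dynptr *ptr) __ksym;

    SEC("tp_btf/kfree_skb")
    int BPF_PROG(on_kfree_skb, struct sk_buff *skb)
    {
            struct bpf_dynptr ptr;
            struct ethhdr eth;

            if (bpf_dynptr_from_skb(skb, 0, &ptr))
                    return 0;
            /* bpf_dynptr_read() also handles non-linear (paged) skb data */
            if (bpf_dynptr_read(&eth, sizeof(eth), &ptr, 0, 0))
                    return 0;
            return 0;
    }

    char _license[] SEC("license") = "GPL";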
I can see the usefulness of parsing the header the same way as tc-bpf.
However, it implicitly means the skb->data and skb_shinfo are trusted also.
afaik, that should be fine as long as skb is not NULL.
From looking at include/trace/events, there are cases where skb is NULL,
e.g. tcp_send_reset. It is not something new though; e.g. using skb->sk in
the tp_btf could already be bad. This should be addressed before allowing
more kfuncs/helpers.
Good catch.
We need to fix this part first:
if (prog_args_trusted(prog))
info->reg_type |= PTR_TRUSTED;
A brute-force fix by adding PTR_MAYBE_NULL is probably overkill. I suspect
passing NULL into a tracepoint is more the exception than the rule.
Maybe we can use kfunc's "__" suffix approach for tracepoint args?
[43947] FUNC_PROTO '(anon)' ret_type_id=0 vlen=4
'__data' type_id=10
'sk' type_id=3434
'skb' type_id=2386
'reason' type_id=39860
[43948] FUNC '__bpf_trace_tcp_send_reset' type_id=43947 linkage=static
Then do:
diff --git a/include/trace/events/tcp.h b/include/trace/events/tcp.h
index 49b5ee091cf6..325e8a31729a 100644
--- a/include/trace/events/tcp.h
+++ b/include/trace/events/tcp.h
@@ -91,7 +91,7 @@ DEFINE_RST_REASON(FN, FN)
TRACE_EVENT(tcp_send_reset,
TP_PROTO(const struct sock *sk,
- const struct sk_buff *skb,
+ const struct sk_buff *skb__nullable,
and detect it in btf_ctx_access().
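[Something along these lines inside btf_ctx_access() could work. A rough
sketch only: btf_param_match_suffix() is assumed to mirror the existing
kfunc "__" suffix matching, and the exact plumbing may differ.]

    /* Sketch: if the tracepoint proto names the arg with a "__nullable"
     * suffix, keep PTR_TRUSTED but also mark PTR_MAYBE_NULL so the
     * verifier requires a NULL check before the arg can be dereferenced
     * or passed to a KF_TRUSTED_ARGS kfunc.
     */
    if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
            info->reg_type |= PTR_MAYBE_NULL;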
+1. It is a neat solution. Thanks for the suggestion.
Philo, can you give it a try to fix this in the next re-spin?
Glad to do that. I'll try implementing a "__nullable" suffix for tracepoint
args.
Thanks.