On 7/1/20 7:16 PM, Daniel T. Lee wrote:
Currently, BPF programs that attach to kprobe/sys_connect do not work properly.
Commit 34745aed515c ("samples/bpf: fix kprobe attachment issue on x64")
modified the bpf_load handling of kprobe events on the x64 architecture:
if the kprobe event target starts with "sys_", the prefix "__x64_" is
prepended to the event name.
Appending "__x64_" prefix with kprobe/sys_* event was appropriate as a
solution to most of the problems caused by the commit below.
commit d5a00528b58c ("syscalls/core, syscalls/x86: Rename struct
pt_regs-based sys_*() to __x64_sys_*()")
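For reference, the prefixing rule boils down to something like the
following simplified userspace sketch (an illustration only, not the
actual bpf_load.c code, which also handles kretprobes and error fallback):

#include <stdio.h>
#include <string.h>

/* Sketch of the rule described above: on x86_64, a kprobe target that
 * starts with "sys_" is registered against the "__x64_"-prefixed
 * arch-specific entry point instead.
 */
static void resolve_kprobe_target(const char *event, char *buf, size_t len)
{
#ifdef __x86_64__
	if (strncmp(event, "sys_", 4) == 0) {
		snprintf(buf, len, "__x64_%s", event);
		return;
	}
#endif
	snprintf(buf, len, "%s", event);
}

int main(void)
{
	char buf[128];

	resolve_kprobe_target("sys_connect", buf, sizeof(buf));
	printf("%s\n", buf);	/* "__x64_sys_connect" on x86_64 */

	resolve_kprobe_target("__sys_connect", buf, sizeof(buf));
	printf("%s\n", buf);	/* "__sys_connect": the rule is bypassed */
	return 0;
}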
However, the sys_connect kprobe event still does not work properly. With
a kprobe on __sys_connect, the syscall parameters can be fetched as
expected, but with a kprobe on __x64_sys_connect they cannot.
Because of this, this commit fixes the sys_connect event by specifying
__sys_connect directly, which bypasses the "__x64_" prepending rule of
bpf_load.
In the kernel code, we have:

SYSCALL_DEFINE3(connect, int, fd, struct sockaddr __user *, uservaddr,
		int, addrlen)
{
	return __sys_connect(fd, uservaddr, addrlen);
}
Depending on the compiler, there is no guarantee that __sys_connect will
not be inlined. I would prefer to still use the entry point __x64_sys_*,
e.g.,

SEC("kprobe/" SYSCALL(sys_write))
Fixes: 34745aed515c ("samples/bpf: fix kprobe attachment issue on x64")
Signed-off-by: Daniel T. Lee <danieltimlee@xxxxxxxxx>
---
samples/bpf/map_perf_test_kern.c | 2 +-
samples/bpf/test_map_in_map_kern.c | 2 +-
samples/bpf/test_probe_write_user_kern.c | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/samples/bpf/map_perf_test_kern.c b/samples/bpf/map_perf_test_kern.c
index 12e91ae64d4d..cebe2098bb24 100644
--- a/samples/bpf/map_perf_test_kern.c
+++ b/samples/bpf/map_perf_test_kern.c
@@ -154,7 +154,7 @@ int stress_percpu_hmap_alloc(struct pt_regs *ctx)
return 0;
}
-SEC("kprobe/sys_connect")
+SEC("kprobe/__sys_connect")
int stress_lru_hmap_alloc(struct pt_regs *ctx)
{
char fmt[] = "Failed at stress_lru_hmap_alloc. ret:%dn";
diff --git a/samples/bpf/test_map_in_map_kern.c b/samples/bpf/test_map_in_map_kern.c
index 6cee61e8ce9b..b1562ba2f025 100644
--- a/samples/bpf/test_map_in_map_kern.c
+++ b/samples/bpf/test_map_in_map_kern.c
@@ -102,7 +102,7 @@ static __always_inline int do_inline_hash_lookup(void *inner_map, u32 port)
return result ? *result : -ENOENT;
}
-SEC("kprobe/sys_connect")
+SEC("kprobe/__sys_connect")
int trace_sys_connect(struct pt_regs *ctx)
{
struct sockaddr_in6 *in6;
diff --git a/samples/bpf/test_probe_write_user_kern.c b/samples/bpf/test_probe_write_user_kern.c
index 6579639a83b2..9b3c3918c37d 100644
--- a/samples/bpf/test_probe_write_user_kern.c
+++ b/samples/bpf/test_probe_write_user_kern.c
@@ -26,7 +26,7 @@ struct {
* This example sits on a syscall, and the syscall ABI is relatively stable
* of course, across platforms, and over time, the ABI may change.
*/
-SEC("kprobe/sys_connect")
+SEC("kprobe/__sys_connect")
int bpf_prog1(struct pt_regs *ctx)
{
struct sockaddr_in new_addr, orig_addr = {};