I am using a normal SEC(uprobe) in the eBPF code. The workload is YCSB
(with 1 thread) running against a cluster of 3 Redis nodes, and I filter
the uprobes for 3 PIDs (the Redis nodes).
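Roughly, the BPF side of each probe looks like the sketch below
(simplified; the handler name is a placeholder and the real program
records more than a printk):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

SEC("uprobe")
int handle_entry(struct pt_regs *ctx)
{
	/* the real handler records call data; across the whole 9 minute
	 * run these symbols are only hit ~90 times */
	__u32 pid = bpf_get_current_pid_tgid() >> 32;

	bpf_printk("uprobe hit in pid %u", pid);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";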
When I profiled the machine with perf, I could not see any glaring
differences. Should I repeat this and send the perf.data file here?
Best Regards,
Sebastião
On 2024-11-14 09:08, Jiri Olsa wrote:
On Wed, Nov 13, 2024 at 11:33:01PM +0000, Sebastião Santos Boavida
Amaro wrote:
Hi,
I am using:
libbpf-cargo = "0.24.6"
libbpf-rs = "0.24.6"
libbpf-sys = "1.4.3"
On kernel 6.8.0-47-generic.
I contacted the libbpf-rs guys, and they told me this belonged here.
I am attaching 252 uprobes to a system. These symbols are not called
regularly (roughly 90 times over 9 minutes); however, when I specify a
PID, the throughput drops by a factor of 3, from 12k ops/sec to 4k
ops/sec. When I do not specify a PID and simply pass -1, the throughput
remains the same (as it should, since 90 calls are not enough to cause
noticeable overhead, I would say).
It looks as if we are switching from userspace to kernel space without
triggering the uprobe. I do not know if this is a known issue, but it
does not look like intended behavior.
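For concreteness, each attach is roughly the equivalent of the libbpf
call below (a sketch using the C API that the Rust bindings wrap; the
symbol name and binary path are placeholders). The only thing that
changes between the two runs is the pid argument:

#include <sys/types.h>
#include <bpf/libbpf.h>

static struct bpf_link *attach_one(struct bpf_program *prog, pid_t pid)
{
	/* resolve the symbol by name, so the explicit offset stays 0 */
	LIBBPF_OPTS(bpf_uprobe_opts, opts, .func_name = "someRedisSymbol");

	/* pid  > 0: the probe fires only for that process
	 *           (one attach per Redis pid -> ~4k ops/sec)
	 * pid == -1: the probe fires for any process hitting the symbol
	 *           (-> ~12k ops/sec) */
	return bpf_program__attach_uprobe_opts(prog, pid,
					       "/usr/local/bin/redis-server",
					       0, &opts);
}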
hi,
thanks for the report, I cc-ed some other folks and the tracing list
I'm not aware of such a slowdown, I think with the pid filter in place
there should be less work to do
could you please provide more details?
- do you know which uprobe interface you are using:
  uprobe over perf event or uprobe_multi? (likely uprobe_multi,
  because you said above you attach 250 probes)
- more details on the workload, like what the threads/processes are,
  how many, and I guess how you trigger the bpf program
- do you filter on a single pid or more than one
- could you profile the workload with perf
thanks,
jirka