On 06/26/2019 08:12 AM, Andrii Nakryiko wrote:
> BPF_MAP_TYPE_PERF_EVENT_ARRAY map is often used to send data from a BPF
> program to user space for additional processing. libbpf already has a very
> low-level API to read a single CPU's perf buffer,
> bpf_perf_event_read_simple(), but it's hard to use and requires a lot of
> code to set everything up. This patch adds a perf_buffer abstraction on top
> of it, abstracting the per-CPU setup and polling logic into a simple and
> convenient API, similar to what BCC provides.
>
> perf_buffer__new() sets up per-CPU ring buffers and updates the
> corresponding BPF map entries. It accepts two user-provided callbacks: one
> for handling raw samples and one for getting notifications of lost samples
> due to buffer overflow.
>
> perf_buffer__poll() is used to fetch ring buffer data across all CPUs,
> utilizing an epoll instance.
>
> perf_buffer__free() does the corresponding clean up and unsets FDs from the
> BPF map.
>
> None of these APIs are thread-safe. Users should ensure proper
> locking/coordination if used in a multi-threaded setup.
>
> Signed-off-by: Andrii Nakryiko <andriin@xxxxxx>

Aside from the current feedback, this series generally looks great! Two small
things:

> diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
> index 2382fbda4cbb..10f48103110a 100644
> --- a/tools/lib/bpf/libbpf.map
> +++ b/tools/lib/bpf/libbpf.map
> @@ -170,13 +170,16 @@ LIBBPF_0.0.4 {
>  		btf_dump__dump_type;
>  		btf_dump__free;
>  		btf_dump__new;
> -		btf__parse_elf;
>  		bpf_object__load_xattr;
>  		bpf_program__attach_kprobe;
>  		bpf_program__attach_perf_event;
>  		bpf_program__attach_raw_tracepoint;
>  		bpf_program__attach_tracepoint;
>  		bpf_program__attach_uprobe;
> +		btf__parse_elf;
>  		libbpf_num_possible_cpus;
>  		libbpf_perf_event_disable_and_close;
> +		perf_buffer__free;
> +		perf_buffer__new;
> +		perf_buffer__poll;

We should prefix these with libbpf_* given they are not strictly BPF-only but
rather helper functions. Also, we should convert bpftool
(tools/bpf/bpftool/map_perf_ring.c) to make use of these new helpers instead
of open-coding the same logic there.

Thanks,
Daniel
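
For illustration, a minimal usage sketch of the proposed API as described in
the commit message above. The callback prototypes and the perf_buffer_opts
layout are assumed from that description and may differ from what finally
lands; handle_sample(), handle_lost() and consume_events() are purely
illustrative names, and error handling is simplified.

/* Sketch only: verify signatures against the final libbpf headers. */
#include <stdio.h>
#include <linux/types.h>
#include <bpf/libbpf.h>

static void handle_sample(void *ctx, int cpu, void *data, __u32 size)
{
	/* Raw sample is handed over as an opaque blob; cast to your event struct. */
	printf("cpu %d: sample of %u bytes\n", cpu, size);
}

static void handle_lost(void *ctx, int cpu, __u64 cnt)
{
	/* Invoked when a per-CPU ring buffer overflowed and samples were dropped. */
	fprintf(stderr, "cpu %d: lost %llu samples\n", cpu, (unsigned long long)cnt);
}

/* map_fd must refer to a BPF_MAP_TYPE_PERF_EVENT_ARRAY map. */
static int consume_events(int map_fd)
{
	struct perf_buffer_opts pb_opts = {
		.sample_cb = handle_sample,
		.lost_cb   = handle_lost,
		.ctx       = NULL,
	};
	struct perf_buffer *pb;
	int err;

	/* 8 pages of ring buffer per CPU. */
	pb = perf_buffer__new(map_fd, 8, &pb_opts);
	err = libbpf_get_error(pb);
	if (err) {
		fprintf(stderr, "failed to set up perf buffer: %d\n", err);
		return err;
	}

	/* Poll all per-CPU buffers; 100ms timeout per iteration. */
	while ((err = perf_buffer__poll(pb, 100)) >= 0)
		;

	perf_buffer__free(pb);
	return err;
}

The same setup/poll/free sequence is roughly what a converted
tools/bpf/bpftool/map_perf_ring.c would boil down to, instead of its current
open-coded per-CPU mmap and epoll handling.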