On 1/7/20 2:15 PM, Jiri Olsa wrote:
On Tue, Jan 07, 2020 at 09:30:12AM +0100, Daniel Borkmann wrote:
On 1/7/20 12:46 AM, Alexei Starovoitov wrote:
On Sun, Dec 29, 2019 at 03:37:40PM +0100, Jiri Olsa wrote:
When unwinding the stack we need to identify each
address in order to successfully continue. Add a
latch tree to keep the trampolines for quick lookup
during the unwind.
Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
...
+bool is_bpf_trampoline(void *addr)
+{
+	return latch_tree_find(addr, &tree, &tree_ops) != NULL;
+}
+
 struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 {
 	struct bpf_trampoline *tr;
@@ -65,6 +98,7 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
 	for (i = 0; i < BPF_TRAMP_MAX; i++)
 		INIT_HLIST_HEAD(&tr->progs_hlist[i]);
 	tr->image = image;
+	latch_tree_insert(&tr->tnode, &tree, &tree_ops);
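For reference, the hunk above relies on a latch tree root and comparator
ops that fall outside the quoted context. A minimal sketch of what they
could look like (per include/linux/rbtree_latch.h), assuming the trampoline
image is a single page and reusing the tnode/tree/tree_ops names from the
hunk:

static struct latch_tree_root tree;

static __always_inline bool tr_less(struct latch_tree_node *a,
				    struct latch_tree_node *b)
{
	struct bpf_trampoline *ta = container_of(a, struct bpf_trampoline, tnode);
	struct bpf_trampoline *tb = container_of(b, struct bpf_trampoline, tnode);

	return ta->image < tb->image;
}

/* Classify addr against [image, image + PAGE_SIZE); lookups run under
 * rcu_read_lock(), while insertions must be serialized by the caller
 * (here, presumably the trampoline mutex).
 */
static __always_inline int tr_comp(void *addr, struct latch_tree_node *n)
{
	struct bpf_trampoline *tr = container_of(n, struct bpf_trampoline, tnode);

	if (addr < tr->image)
		return -1;
	if (addr >= tr->image + PAGE_SIZE)
		return 1;
	return 0;
}

static const struct latch_tree_ops tree_ops = {
	.less = tr_less,
	.comp = tr_comp,
};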
Thanks for the fix. I was going to apply it, but then realized that the bpf
dispatcher logic has the same issue.
Could you generalize the fix for both?
Maybe bpf_jit_alloc_exec_page() could do the latch_tree_insert(),
and a new version of bpf_jit_free_exec() would be needed to do the
latch_tree_erase().
Wdyt?
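One possible shape for that generalization, sketched with hypothetical
names (bpf_image, bpf_image_alloc()/bpf_image_free() and image_tree are
illustrations, not code from the patch): embed the latch node in the
allocated page itself, so trampolines and dispatchers both get the lookup
for free.

struct bpf_image {
	struct latch_tree_node tnode;
	u8 data[];
};

/* The node eats into the page, so users get slightly less than PAGE_SIZE. */
#define BPF_IMAGE_SIZE	(PAGE_SIZE - sizeof(struct bpf_image))

void *bpf_image_alloc(void)
{
	struct bpf_image *image;

	image = bpf_jit_alloc_exec_page();
	if (!image)
		return NULL;

	latch_tree_insert(&image->tnode, &image_tree, &image_tree_ops);
	return image->data;
}

void bpf_image_free(void *data)
{
	struct bpf_image *image = container_of(data, struct bpf_image, data);

	latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops);
	bpf_jit_free_exec(image);	/* but see the RCU point below */
}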
Also, this patch is buggy, since your latch lookup happens under RCU, but
I don't see anything that waits for a grace period once you remove a node
from the tree. Instead you free the trampoline right away.
thanks, did not think of that.. will (try to) fix ;-)
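For completeness, the usual fix is to wait out the latch-tree readers
before the page is freed; continuing the sketch above, the teardown path
would become:

	latch_tree_erase(&image->tnode, &image_tree, &image_tree_ops);
	synchronize_rcu();	/* latch_tree_find() runs under rcu_read_lock() */
	bpf_jit_free_exec(image);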
On a different question: given we already have all the kallsyms
infrastructure for BPF in place, did you look into whether it's feasible
to make it a bit more generic so it also covers the JITed buffers of
trampolines?
hum, it did not occur to me that we'd want to see it in kallsyms,
but sure.. how about: bpf_trampoline_<key>?
The key would be taken from bpf_trampoline::key, which is the function's BTF id.
Yeap, I think bpf_trampoline_<btf_id> would make sense here.
Thanks,
Daniel
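As a rough illustration of the naming scheme settled on above, each
trampoline's symbol could be formatted from its key along these lines
(a sketch only; the surrounding kallsyms registration is omitted):

	char sym[KSYM_NAME_LEN];

	snprintf(sym, sizeof(sym), "bpf_trampoline_%llu", tr->key);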