Re: [PATCH 5/6] bpf: Allow to resolve bpf trampoline and dispatcher in unwind

On 1/18/20 2:49 PM, Jiri Olsa wrote:
When unwinding the stack, we need to identify each address
to continue successfully. Add a latch tree to keep trampolines
for quick lookup during the unwind.

The patch uses the first 48 bytes of the page for the latch tree
node, leaving the remaining 4048 bytes for the trampoline or
dispatcher generated code.

That is still enough not to affect the maximum counts of
trampoline and dispatcher programs.

Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
---
  include/linux/bpf.h     | 12 ++++++-
  kernel/bpf/core.c       |  2 ++
  kernel/bpf/dispatcher.c |  4 +--
  kernel/bpf/trampoline.c | 76 +++++++++++++++++++++++++++++++++++++----
  4 files changed, 84 insertions(+), 10 deletions(-)

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 8e3b8f4ad183..41eb0cf663e8 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -519,7 +519,6 @@ struct bpf_trampoline *bpf_trampoline_lookup(u64 key);
  int bpf_trampoline_link_prog(struct bpf_prog *prog);
  int bpf_trampoline_unlink_prog(struct bpf_prog *prog);
  void bpf_trampoline_put(struct bpf_trampoline *tr);
-void *bpf_jit_alloc_exec_page(void);
  #define BPF_DISPATCHER_INIT(name) {			\
  	.mutex = __MUTEX_INITIALIZER(name.mutex),	\
  	.func = &name##func,				\
@@ -551,6 +550,13 @@ void *bpf_jit_alloc_exec_page(void);
  #define BPF_DISPATCHER_PTR(name) (&name)
  void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
  				struct bpf_prog *to);
+struct bpf_image {
+	struct latch_tree_node tnode;
+	unsigned char data[];
+};
+#define BPF_IMAGE_SIZE (PAGE_SIZE - sizeof(struct bpf_image))
+bool is_bpf_image(void *addr);
+void *bpf_image_alloc(void);
  #else
  static inline struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
  {
@@ -572,6 +578,10 @@ static inline void bpf_trampoline_put(struct bpf_trampoline *tr) {}
  static inline void bpf_dispatcher_change_prog(struct bpf_dispatcher *d,
  					      struct bpf_prog *from,
  					      struct bpf_prog *to) {}
+static inline bool is_bpf_image(void *addr)
+{
+	return false;
+}
  #endif
  struct bpf_func_info_aux {
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 29d47aae0dd1..b3299dc9adda 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -704,6 +704,8 @@ bool is_bpf_text_address(unsigned long addr)
  	rcu_read_lock();
  	ret = bpf_prog_kallsyms_find(addr) != NULL;
+	if (!ret)
+		ret = is_bpf_image((void *) addr);
  	rcu_read_unlock();

Btw, shouldn't this be a separate entity entirely to avoid unnecessary inclusion
in bpf_arch_text_poke() for the is_bpf_text_address() check there?

Did you drop the bpf_{trampoline,dispatcher}_<...> entry addition in kallsyms?

Thanks,
Daniel
