On Wed, Mar 12, 2025 at 2:40 PM Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Tue, Mar 11, 2025 at 6:04 PM Jordan Rome <linux@xxxxxxxxxxxxxx> wrote:
> >
> > For kernels where btf is not mandatory
> > we should log loading errors with `pr_info`
> > and not retry where we increase the log level
> > as this is just added noise.
> >
> > Signed-off-by: Jordan Rome <linux@xxxxxxxxxxxxxx>
> > ---
> >  tools/lib/bpf/btf.c             | 36 ++++++++++++++++++---------------
> >  tools/lib/bpf/libbpf.c          |  3 ++-
> >  tools/lib/bpf/libbpf_internal.h |  2 +-
> >  3 files changed, 23 insertions(+), 18 deletions(-)
> >
> > diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
> > index eea99c766a20..7da4807451bb 100644
> > --- a/tools/lib/bpf/btf.c
> > +++ b/tools/lib/bpf/btf.c
> > @@ -1379,9 +1379,10 @@ static void *btf_get_raw_data(const struct btf *btf, __u32 *size, bool swap_endi
> >
> >  int btf_load_into_kernel(struct btf *btf,
> >  			 char *log_buf, size_t log_sz, __u32 log_level,
> > -			 int token_fd)
> > +			 int token_fd, bool btf_mandatory)
> >  {
> >  	LIBBPF_OPTS(bpf_btf_load_opts, opts);
> > +	enum libbpf_print_level print_level;
> >  	__u32 buf_sz = 0, raw_size;
> >  	char *buf = NULL, *tmp;
> >  	void *raw_data;
> > @@ -1435,22 +1436,25 @@ int btf_load_into_kernel(struct btf *btf,
> >
> >  	btf->fd = bpf_btf_load(raw_data, raw_size, &opts);
> >  	if (btf->fd < 0) {
> > -		/* time to turn on verbose mode and try again */
> > -		if (log_level == 0) {
> > -			log_level = 1;
> > -			goto retry_load;
> > +		if (btf_mandatory) {
> > +			/* time to turn on verbose mode and try again */
> > +			if (log_level == 0) {
> > +				log_level = 1;
> > +				goto retry_load;
> > +			}
> > +			/* only retry if caller didn't provide custom log_buf, but
> > +			 * make sure we can never overflow buf_sz
> > +			 */
> > +			if (!log_buf && errno == ENOSPC && buf_sz <= UINT_MAX / 2)
>
> Original behavior was to go from log_level 0 to log_level 1 when the
> user provided custom log_buf, which would happen even for
> non-btf_mandatory case. I'd like to not change that behavior.
>

I don't quite understand why we want to increase the log level if btf
is not mandatory. Users will still get an info message that btf failed
to load and if they are still curious, they can increase the log level
themselves, right?

The goal of this patch is to reduce log noise in cases where btf fails
to load and is not mandatory.

> Did you find some problem with the code I proposed a few emails back?

Truth be told, I didn't like the added complexity in the conditionals.
I tried something similar in an earlier version and it led to a
SEGFAULT when trying to access `buf[0]`, which had not been allocated.

> If not, why not do that instead and preserve that custom log_buf and
> log_level upgrade behavior?
>
> pw-bot: cr
>
> > +				goto retry_load;
> >  		}
> > -		/* only retry if caller didn't provide custom log_buf, but
> > -		 * make sure we can never overflow buf_sz
> > -		 */
> > -		if (!log_buf && errno == ENOSPC && buf_sz <= UINT_MAX / 2)
> > -			goto retry_load;
> > -
> >  		err = -errno;
> > -		pr_warn("BTF loading error: %s\n", errstr(err));
> > -		/* don't print out contents of custom log_buf */
> > -		if (!log_buf && buf[0])
> > -			pr_warn("-- BEGIN BTF LOAD LOG ---\n%s\n-- END BTF LOAD LOG --\n", buf);
> > +		print_level = btf_mandatory ? LIBBPF_WARN : LIBBPF_INFO;
> > +		__pr(print_level, "BTF loading error: %s\n", errstr(err));
> > +		if (!log_buf && log_level)
> > +			__pr(print_level,
> > +			     "-- BEGIN BTF LOAD LOG ---\n%s\n-- END BTF LOAD LOG --\n",
> > +			     buf);
> >  	}
> >
> >  done:
> > @@ -1460,7 +1464,7 @@ int btf_load_into_kernel(struct btf *btf,
> >
> >  int btf__load_into_kernel(struct btf *btf)
> >  {
> > -	return btf_load_into_kernel(btf, NULL, 0, 0, 0);
> > +	return btf_load_into_kernel(btf, NULL, 0, 0, 0, true);
> >  }
> >
> >  int btf__fd(const struct btf *btf)
> > diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
> > index 8e32286854ef..2cb3f067a12e 100644
> > --- a/tools/lib/bpf/libbpf.c
> > +++ b/tools/lib/bpf/libbpf.c
> > @@ -3604,9 +3604,10 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
> >  		 */
> >  		btf__set_fd(kern_btf, 0);
> >  	} else {
> > +		btf_mandatory = kernel_needs_btf(obj);
> >  		/* currently BPF_BTF_LOAD only supports log_level 1 */
> >  		err = btf_load_into_kernel(kern_btf, obj->log_buf, obj->log_size,
> > -					   obj->log_level ? 1 : 0, obj->token_fd);
> > +					   obj->log_level ? 1 : 0, obj->token_fd, btf_mandatory);
> >  	}
> >  	if (sanitize) {
> >  		if (!err) {
> > diff --git a/tools/lib/bpf/libbpf_internal.h b/tools/lib/bpf/libbpf_internal.h
> > index de498e2dd6b0..f1de2ba462c3 100644
> > --- a/tools/lib/bpf/libbpf_internal.h
> > +++ b/tools/lib/bpf/libbpf_internal.h
> > @@ -408,7 +408,7 @@ int libbpf__load_raw_btf(const char *raw_types, size_t types_len,
> >  			 int token_fd);
> >  int btf_load_into_kernel(struct btf *btf,
> >  			 char *log_buf, size_t log_sz, __u32 log_level,
> > -			 int token_fd);
> > +			 int token_fd, bool btf_mandatory);
> >
> >  struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf);
> >  void btf_get_kernel_prefix_kind(enum bpf_attach_type attach_type,
> > --
> > 2.47.1
> >