Hi Andrii Nakryiko,

>> bpf_dump_raw_ok() depends on kallsyms_show_value() and we already
>> have a false definition for the !CONFIG_KALLSYMS case. But we'll
>> soon expand on kallsyms_show_value() and so to make the code
>> easier to follow just provide a direct !CONFIG_KALLSYMS definition
>> for bpf_dump_raw_ok() as well.
>
> I'm sorry, I'm failing to follow the exact reasoning about
> simplification. It seems simpler to have
>
> static inline bool kallsyms_show_value(const struct cred *cred)
> {
>         return false;
> }
>
> and control it from kallsyms-related internal header, rather than
> adding CONFIG_KALLSYMS ifdef-ery to include/linux/filter.h and
> redefining that `return false` decision. What if in the future we
> decide that if !CONFIG_KALLSYMS it's ok to show raw addresses, now
> we'll have to remember to update it in two places.
>
> Unless I'm missing some other complications?

Patch 3/3 does the same: it extends the functionality of
kallsyms_show_value() in the !CONFIG_KALLSYMS case.

All other users, such as the modules and kprobes code, use this API
as a sanity/permission check and then print the address, like below:

static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
{
	...
	if (!kallsyms_show_value(m->file->f_cred))
		seq_printf(m, "0x%px-0x%px\t%ps\n", NULL, NULL,
			   (void *)ent->start_addr);
	else
		seq_printf(m, "0x%px-0x%px\t%ps\n",
			   (void *)ent->start_addr, (void *)ent->end_addr,
			   (void *)ent->start_addr);
	...
}

So there will be no issue if we move kallsyms_show_value() out of the
KALLSYMS dependency; this code will keep working in the !KALLSYMS
case as well.

But the BPF code logic was more complex, and it seems this API was
also used to check whether KALLSYMS is enabled, as per the comment in
bpf_dump_raw_ok():

/*
 * Reconstruction of call-sites is dependent on kallsyms,
 * thus make dump the same restriction.
 */

and also as per the code below. (We were not sure whether BPF would
still work with patch 3/3, because we have no expertise in the BPF
code, so we kept the behaviour the same.)

if (ulen) {
	if (bpf_dump_raw_ok(file->f_cred)) {
		unsigned long ksym_addr;
		u64 __user *user_ksyms;
		u32 i;

		/* copy the address of the kernel symbol
		 * corresponding to each function
		 */
		ulen = min_t(u32, info.nr_jited_ksyms, ulen);
		user_ksyms = u64_to_user_ptr(info.jited_ksyms);
		if (prog->aux->func_cnt) {
			for (i = 0; i < ulen; i++) {
				...
			}

Earlier conversation for this change:
https://lkml.org/lkml/2022/4/13/326

There, Petr CC'ed the BPF maintainers to get their opinion on whether
the BPF code can work with patch 3/3; if not, then we need this patch.

Thanks,
Maninder Singh