Re: [PATCH bpf-next] bpf: Add --skip_encoding_btf_inconsistent_proto, --btf_gen_optimized to pahole flags for v1.25

On 5/15/23 7:53 AM, Alan Maguire wrote:
On 15/05/2023 05:06, Yonghong Song wrote:


On 5/14/23 2:49 PM, Jiri Olsa wrote:
On Sun, May 14, 2023 at 10:37:08AM -0700, Yonghong Song wrote:


On 5/12/23 7:59 PM, Yonghong Song wrote:


On 5/12/23 2:54 PM, Jiri Olsa wrote:
On Fri, May 12, 2023 at 11:59:34AM -0700, Alexei Starovoitov wrote:
On Fri, May 12, 2023 at 9:04 AM Alan Maguire
<alan.maguire@xxxxxxxxxx> wrote:

On 12/05/2023 03:51, Yafang Shao wrote:
On Wed, May 10, 2023 at 9:03 PM Alan Maguire
<alan.maguire@xxxxxxxxxx> wrote:

v1.25 of pahole supports filtering out functions with multiple
inconsistent function prototypes or optimized-out parameters from the
BTF representation. These present problems because there is no
additional info in BTF saying which inconsistent prototype matches
which function instance to help guide attachment, and functions with
optimized-out parameters can lead to incorrect assumptions about
register contents.

So for now, filter out such functions while adding BTF representations
for functions that have "."-suffixes (foo.isra.0) but not
optimized-out parameters. This patch assumes that the below-linked
changes land in pahole for v1.25.

Issues with pahole filtering being too aggressive in removing
functions appear to be resolved now, but CI and further testing will
confirm.

Signed-off-by: Alan Maguire <alan.maguire@xxxxxxxxxx>
---
    scripts/pahole-flags.sh | 3 +++
    1 file changed, 3 insertions(+)

diff --git a/scripts/pahole-flags.sh b/scripts/pahole-flags.sh
index 1f1f1d397c39..728d55190d97 100755
--- a/scripts/pahole-flags.sh
+++ b/scripts/pahole-flags.sh
@@ -23,5 +23,8 @@ if [ "${pahole_ver}" -ge "124" ]; then
 	# see PAHOLE_HAS_LANG_EXCLUDE
 	extra_paholeopt="${extra_paholeopt} --lang_exclude=rust"
 fi
+if [ "${pahole_ver}" -ge "125" ]; then
+	extra_paholeopt="${extra_paholeopt} --skip_encoding_btf_inconsistent_proto --btf_gen_optimized"
+fi
 
 echo ${extra_paholeopt}
--
2.31.1


That change looks like a workaround to me.
There may be multiple functions that have the same proto, e.g.:

     $ grep -r "bpf_iter_detach_map(struct bpf_iter_aux_info \*aux)" kernel/bpf/ net/core/
     kernel/bpf/map_iter.c:static void bpf_iter_detach_map(struct bpf_iter_aux_info *aux)
     net/core/bpf_sk_storage.c:static void bpf_iter_detach_map(struct bpf_iter_aux_info *aux)

     $ bpftool btf dump file /sys/kernel/btf/vmlinux | grep -B 2 bpf_iter_detach_map
     [34691] FUNC_PROTO '(anon)' ret_type_id=0 vlen=1
     	'aux' type_id=2638
     [34692] FUNC 'bpf_iter_detach_map' type_id=34691 linkage=static

We don't know which one it is in the BTF.
However, I'm not against this change, as it can avoid some issues.


In the above case, the BTF representation is consistent though.
That is, if I attach fentry progs to either of these functions
based on that BTF representation, nothing will crash.

That's ultimately what those changes are about: ensuring
consistency in the BTF representation, so that when a function is in
BTF we know its signature can be safely used by fentry, for example.
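
For illustration, a minimal fentry program of the sort this enables;
an untested sketch, using the bpf_iter_detach_map example from above:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Untested sketch: the program trusts the BTF prototype of the attach
 * target; if BTF carried a prototype that doesn't match the instance
 * we end up attached to, 'aux' would be read from the wrong register
 * state. */
SEC("fentry/bpf_iter_detach_map")
int BPF_PROG(trace_detach_map, struct bpf_iter_aux_info *aux)
{
	bpf_printk("bpf_iter_detach_map aux=%p", aux);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";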

The question of being able to identify functions (as opposed
to having a consistent representation) is the next step.
Finding a way to link between kallsyms and BTF would allow us to
have multiple inconsistent functions in BTF, since we could map
from BTF -> kallsyms safely. So two functions called "foo"
with different function signatures would be okay, because
we'd know which was which in kallsyms and could attach
safely. Something like a BTF tag for the function that could
clarify that mapping - but just for cases where it would
otherwise be ambiguous - is probably the way forward
longer term.

Jiri's talking about this topic at LSF/MM/BPF this week I believe.

Jiri presented a few ideas during LSFMMBPF.

I feel the best approach is to add a set of addr-s to BTF
via a special decl_tag.
We can also consider extending KIND_FUNC.
The advantage is that every BTF func will have one or more addrs
associated with it, and bpf prog loading logic wouldn't need to do
a fragile name comparison between btf and kallsyms.
pahole can take addrs from dwarf and optionally double-check them
against kallsyms.
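
To make the "fragile name comparison" concrete, this is roughly the
name-only lookup we rely on today; an untested user-space sketch
against libbpf:

#include <stdio.h>
#include <bpf/btf.h>

int main(int argc, char **argv)
{
	const char *name = argc > 1 ? argv[1] : "bpf_iter_detach_map";
	struct btf *btf = btf__load_vmlinux_btf();
	int id;

	if (!btf)
		return 1;
	/* returns the first FUNC with this name; same-name duplicates
	 * and their addresses are invisible at this level */
	id = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
	printf("%s -> btf id %d\n", name, id);
	btf__free(btf);
	return 0;
}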

Yonghong summed it up in another email discussion, pasting it in here:

     So overall we have three options as kallsyms representation now:
       (a) "addr module:foo:dir_a/dir_b/core.c"
       (b) "addr module:foo"
       (c) "addr module:foo:btf_id"

     option (a):
       'dir_a/dir_b/core.c' needs to be encoded in BTF.
       user space either checks the file path or the func signature
       to find attach_btf_id and passes it to the kernel.
       kernel can find the file path in BTF and then look up
       kallsyms to find the addr.


I like the source-centric nature of this, but the only
danger here is we might not get a 1:1 mapping between
source file location and address; consider the case
of a static inline function in a .h file which doesn't
get inlined. It could have multiple addresses associated
with the same source. For example:

static inline void __list_del_entry(struct list_head *entry)
{
	if (!__list_del_entry_valid(entry))
		return;

	__list_del(entry->prev, entry->next);
}

$ grep __list_del_entry /proc/kallsyms
ffffffff982cc5c0 t __pfx___list_del_entry
ffffffff982cc5d0 t __list_del_entry
ffffffff985b0860 t __pfx___list_del_entry
ffffffff985b0870 t __list_del_entry
ffffffff9862d800 t __pfx___list_del_entry
ffffffff9862d810 t __list_del_entry
ffffffff987d3dd0 t __pfx___list_del_entry
ffffffff987d3de0 t __list_del_entry
ffffffff987f37a0 T __pfx___list_del_entry_valid
ffffffff987f37b0 T __list_del_entry_valid
ffffffff989fdd10 t __pfx___list_del_entry
ffffffff989fdd20 t __list_del_entry
ffffffff99baf08c r __ksymtab___list_del_entry_valid
ffffffffc12da2e0 t __list_del_entry	[bnep]
ffffffffc12da2d0 t __pfx___list_del_entry	[bnep]
ffffffffc092d6b0 t __list_del_entry	[videobuf2_common]
ffffffffc092d6a0 t __pfx___list_del_entry	[videobuf2_common]

Until now, we have been okay with static inline functions carried
into .o files, since all inline functions are marked as notrace
(fentry cannot attach to them).

However, now we have
https://lore.kernel.org/live-patching/20230502164102.1a51cdb4@xxxxxxxxxxxxxxxxxx/T/#u

If that patch is merged, then the above inline function in the
header surviving in the .o file is indeed a problem.

Typically users won't trace inline functions defined in a header
file. But for completeness, agreed that the file path may not be
fully reliable.


     option (b):
       "addr" needs to be encoded in BTF.
       user space checks the func signature to find
       attach_btf_id and passes it to the kernel.
       kernel can find the addr in BTF and use it.


This seems like the safest option to me. Ideally we wouldn't
need such information for every function - only ones with
multiple sites and ambiguous signatures - but the problem
is that a function could have the same name in the kernel and
in a module too. So it seems like we're stuck with providing
additional info to clarify which signature goes with which
function for each static function.

When passing attach_btf_id to the kernel, we have attach_btf_obj_fd as
well, which should differentiate whether it is the kernel or which module it is.


     option (c):
       if the user can decide which function to attach to, e.g.,
       through the func signature, then no BTF encoding
       is necessary. attach_btf_id is passed to the
       kernel, which searches kallsyms to find the matching
       btf_id, and the 'addr' will then be available.

     For option (b) and (c), user space needs to check
     the func signature to find which btf_id to use. If
     same-name static functions have identical
     signatures, then user space would have a hard time
     differentiating them. I think it should be very rare
     for same-name static functions in the kernel to have
     identical signatures. But if we want 100% correctness,
     we may need the file path, in which case option (a)
     is preferable.

As Alexei mentioned in a previous email, for such an extreme case,
if the user is willing to go through the extra step of checking dwarf
to find and match the file path, then (b) and (c) should work
perfectly as well.
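
To make "user space checks func signature" concrete, a crude and
untested sketch of that lookup (matching on parameter count only,
which is exactly where identical signatures break down):

#include <string.h>
#include <bpf/btf.h>

/* walk vmlinux BTF and return a candidate attach_btf_id for 'name'
 * whose prototype has the expected number of parameters */
static int find_func_by_proto(const struct btf *btf, const char *name,
			      int expected_nr_args)
{
	int i, n = btf__type_cnt(btf);

	for (i = 1; i < n; i++) {
		const struct btf_type *t = btf__type_by_id(btf, i);
		const struct btf_type *proto;

		if (!btf_is_func(t))
			continue;
		if (strcmp(btf__name_by_offset(btf, t->name_off), name))
			continue;
		proto = btf__type_by_id(btf, t->type);
		/* crude "signature" check: parameter count only */
		if (btf_vlen(proto) == expected_nr_args)
			return i;
	}
	return -1;
}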

Okay, it looks like this is more complex if the function signature is
the same. In such cases, current BTF dedup will merge these functions
into a single BTF func, and we could have:

     decl_tag_1   ----> dedup'ed static_func
                           ^
                           |
     decl_tag_2   ---------

For such cases, just passing the btf_id of the static func to the
kernel won't work, since the kernel won't be able to know which
decl_tag it should be associated with.

(I did a simple test with vmlinux; it looks like we have
   issues with decl_tag_1/decl_tag_2 -> dedup'ed static_func
   as well, since only one of the decl_tags survives.
   But that is a different issue.
)

So if we intend to add a decl tag (addr or file_path), we should
not dedup static functions, or generally any functions.
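
For concreteness, a hypothetical sketch of the encoding side using
libbpf's btf__add_decl_tag(); the "addr:..." tag string format is
made up here, not something pahole emits today:

#include <stdio.h>
#include <bpf/btf.h>

/* attach one decl_tag per function instance, carrying its address and
 * pointing at the FUNC's btf_id; if two instances dedup into one FUNC,
 * both tags point at the same id, which is exactly the ambiguity above */
static int tag_func_addr(struct btf *btf, int func_btf_id, unsigned long addr)
{
	char value[64];

	snprintf(value, sizeof(value), "addr:0x%lx", addr);
	return btf__add_decl_tag(btf, value, func_btf_id, -1);
}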

I did not think functions would be dedup'ed, but they are ;-) With the
declaration tags in place we could perhaps switch it off, right?

That is my hope too. If funcs with decl tags won't be dedup'ed,
then we should be okay. In the kernel, based on attach_btf_id and
through BTF, we can find the corresponding decl tag (file path or
addr), or use attach_btf_id itself if the btf id is encoded in
kallsyms entries.


or perhaps I just can't think of all the cases we need function dedup
for, so maybe the dedup code could also check the associated decl tag
when comparing functions


Would using the BTF decl tag id instead of the function BTF id to
guide attachment be an option? That way if multiple functions shared
the same signature, we could still get the info to figure out which to
attach to from the decl tag, and we wouldn't need to mess with dedup.
I'm probably missing something which makes that unworkable, but just a
thought. Thanks!

The issue is due to the current bpf syscall UAPI: it passes
attach_btf_id and attach_btf_obj_fd to the kernel. If we change the
UAPI to pass decl_tag_id instead of attach_btf_id to the kernel, we
should be okay, but this requires a bpf syscall UAPI change. Maybe
this works too.
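
For reference, this is roughly how the current UAPI is driven from
user space via libbpf today (untested sketch; the insns would come
from the compiled program):

#include <stddef.h>
#include <bpf/bpf.h>

/* the (attach_btf_obj_fd, attach_btf_id) pair is all the kernel gets
 * to identify the target; a decl-tag-based scheme would change what
 * id is passed here */
static int load_fentry_prog(const struct bpf_insn *insns, size_t insn_cnt,
			    int btf_obj_fd, int attach_btf_id)
{
	LIBBPF_OPTS(bpf_prog_load_opts, opts,
		.expected_attach_type = BPF_TRACE_FENTRY,
		.attach_btf_obj_fd = btf_obj_fd,	/* vmlinux or module BTF fd */
		.attach_btf_id = attach_btf_id,		/* FUNC btf_id of the target */
	);

	return bpf_prog_load(BPF_PROG_TYPE_TRACING, "fentry_sketch", "GPL",
			     insns, insn_cnt, &opts);
}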


Alan



