The problem

Inlining can cause surprises for tracing users, especially when the tool
appears to be working. For example, with

  [root@ ~]# bpftrace -e 'kprobe:switch_mm {}'
  Attaching 1 probe...

the user may not realize that switch_mm() is inlined by leave_mm(), so we
are not tracing the code path leave_mm => switch_mm. (This is x86_64, and
both functions are in arch/x86/mm/tlb.c.)

We have folks working on ideas for offline tools that detect such issues
for critical use cases at compile time. However, I think it is also
necessary to handle this at program load/attach time.

Detect "inlined by some callers" functions

This appears to be straightforward in pahole. Something like the following
should do the work:

diff --git i/btf_encoder.c w/btf_encoder.c
index fd040086827e..e546a059eb4b 100644
--- i/btf_encoder.c
+++ w/btf_encoder.c
@@ -885,6 +885,15 @@ static int32_t btf_encoder__add_func(struct btf_encoder *encoder, struct functio
 	struct llvm_annotation *annot;
 	const char *name;

+	if (function__inlined(fn)) {
+		/* This function is inlined by some callers. */
+	}
+
 	btf_fnproto_id = btf_encoder__add_func_proto(encoder, &fn->proto);
 	name = function__name(fn);
 	btf_fn_id = btf_encoder__add_ref_type(encoder, BTF_KIND_FUNC, btf_fnproto_id, name, false);

Mark "inlined by some callers" functions

We have a few options to mark these functions:

1. We can set struct btf_type.info.kind_flag for these functions, or we can
   use a bit from info.vlen.

2. We can simply not generate BTF for these functions. This is similar to
   --skip_encoding_btf_inconsistent_proto.

Handle tracing inlined functions

If we go with option 1 above, we have a few options to handle program
load/attach to "inlined by some callers" functions:

a) We can reject the load/attach;
b) We can reject the load/attach, unless the user sets a new flag;
c) We can simply print a warning, and let the load/attach work.

(A rough consumer-side sketch of what c) could look like is appended at the
end of this mail.)

Please share your comments on this. Is this something we want to handle?
If so, which of these options makes more sense?

Thanks,
Song
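
---

Appendix: a minimal, hypothetical sketch of option c) on the consumer side,
assuming we go with option 1 and (re)use struct btf_type's kind_flag on
BTF_KIND_FUNC entries to mean "inlined by some callers". Nothing sets that
bit today, and the helper name and warning text below are made up; the
libbpf calls (btf__load_vmlinux_btf(), btf__find_by_name_kind(),
btf__type_by_id(), btf_kflag()) are existing APIs. A loader such as
bpftrace or libbpf could run a check like this before creating the kprobe:

/* inlined_check.c - warn before attaching a kprobe to a function whose
 * BTF FUNC entry is flagged as "inlined by some callers".
 */
#include <stdio.h>
#include <bpf/btf.h>

/* Hypothetical helper: returns 1 if the function is marked as inlined by
 * some callers, 0 if not, <0 on error.
 */
static int func_inlined_by_some_callers(const struct btf *btf, const char *name)
{
	const struct btf_type *t;
	int id;

	id = btf__find_by_name_kind(btf, name, BTF_KIND_FUNC);
	if (id < 0)
		return id;

	t = btf__type_by_id(btf, id);
	if (!t)
		return -1;

	/* Under option 1, kind_flag on a FUNC means "inlined by some callers". */
	return btf_kflag(t) ? 1 : 0;
}

int main(int argc, char **argv)
{
	const char *func = argc > 1 ? argv[1] : "switch_mm";
	struct btf *vmlinux_btf;
	int ret;

	vmlinux_btf = btf__load_vmlinux_btf();
	if (!vmlinux_btf) {
		fprintf(stderr, "failed to load vmlinux BTF\n");
		return 1;
	}

	ret = func_inlined_by_some_callers(vmlinux_btf, func);
	if (ret > 0)
		fprintf(stderr,
			"warning: %s() is inlined by some callers; a kprobe will miss those call paths\n",
			func);

	/* Options a) and b) would fail the attach here instead of only warning. */

	btf__free(vmlinux_btf);
	return 0;
}

The same check would work for option b) by turning the warning into an
error unless the user passed an explicit opt-in flag.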