On Tue, Nov 29, 2022 at 5:27 PM Hao Luo <haoluo@xxxxxxxxxx> wrote:
>
> On Tue, Nov 29, 2022 at 4:35 PM Andrii Nakryiko
> <andrii.nakryiko@xxxxxxxxx> wrote:
> > This is hardly a generic solution, as it requires instrumenting every
> > application to do this, right? So what I'm proposing is exactly to
> > avoid having each individual application do something special just to
> > allow profiling tools to capture build_id.
>
> I agree. Because the mlock approach is working, we didn't look further
> or try improving it. But an upstreamable and generic solution would be
> nice. I think Jiri has started looking at it; I am happy to help
> there.
>

Ok, cool, it would be great to have this work reliably and not rely on
user-space apps doing something special here.

> > Is this due to remapping some binary onto huge pages?
>
> I think so, but I'm not sure.
>

We used to have this problem, but then Song added some in-kernel
support so that we now preserve the original file information. Song,
do you mind providing details?

> > But regardless, your custom BPF applications can fetch this build_id
> > from vm_area_struct->anon_name in pure BPF code, can't they? Why do
> > you need to modify the in-kernel build_id_parse implementation?
>
> The user is using bpf_get_stack() to collect stack traces. They don't
> implement walking the stack and fetching build_id from vma in their
> BPF code.

Ah, I see. Let's figure out why Song's approach doesn't work in your
case, because this anon_name hack is just that -- a hack.
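
For reference, below is a rough sketch, not taken from this thread, of
how a profiler typically collects user stacks with build IDs via
bpf_get_stack() and BPF_F_USER_BUILD_ID; the ringbuf map, the
MAX_STACK_DEPTH value, and the perf_event attach point are illustrative
assumptions. It shows why such a tool depends on the in-kernel
build_id_parse() rather than walking VMAs in its own BPF code:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* illustrative limit, not from this thread */
#define MAX_STACK_DEPTH 127

struct stack_event {
	__u32 pid;
	/* one entry per frame: build ID + file offset (or raw ip on failure) */
	struct bpf_stack_build_id frames[MAX_STACK_DEPTH];
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);
} events SEC(".maps");

SEC("perf_event")
int profile(struct bpf_perf_event_data *ctx)
{
	struct stack_event *e;
	long len;

	e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
	if (!e)
		return 0;

	e->pid = bpf_get_current_pid_tgid() >> 32;

	/* With BPF_F_USER_BUILD_ID, the kernel resolves each user frame to a
	 * (build_id, file offset) pair via build_id_parse() on the backing
	 * vma, instead of returning raw addresses. The buffer size must be a
	 * multiple of sizeof(struct bpf_stack_build_id). */
	len = bpf_get_stack(ctx, e->frames, sizeof(e->frames),
			    BPF_F_USER_STACK | BPF_F_USER_BUILD_ID);
	if (len < 0) {
		bpf_ringbuf_discard(e, 0);
		return 0;
	}

	bpf_ringbuf_submit(e, 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

When build_id_parse() cannot read the build ID from the vma's backing
file (for example, because the binary was remapped onto anonymous huge
pages), a frame's status comes back as BPF_STACK_BUILD_ID_IP with only
the raw address, which is the failure mode being discussed above.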