On Thu, Nov 05, 2020 at 10:24:19AM -0600, Justin Forbes wrote:

SNIP

> > > > hi,
> > > > what gcc are you using? I just updated gcc to:
> > > >
> > > > [jolsa@dell-r440-01 ~]$ gcc --version
> > > > gcc (GCC) 10.2.1 20201016 (Red Hat 10.2.1-6)
> > > >
> > > > and I'm able to build linux 5.10-rc1 again with CONFIG_DEBUG_INFO_BTF=y
> > > >
> > >
> > > gcc --version
> > > gcc (GCC) 10.2.1 20201016 (Red Hat 10.2.1-6)
> > >
> > >   LINK    resolve_btfids
> > > FAILED unresolved symbol vfs_getattr
> > > make: *** [Makefile:1169: vmlinux] Error 255
> > > error: Bad exit status from /home/jmforbes/rpmbuild/tmp/rpm-tmp.7tWIJ4 (%build)
> >
> > ugh, sorry.. I have a local pahole workaround/fix and it sneaked in :-\
> > it's still broken
>
> So, this clearly got triggered by a change in the upstream kernel, but
> are you saying that this is in fact a pahole bug that something
> upstream triggered? Where is the correct fix here? It would be nice
> to get CONFIG_DEBUG_INFO_BTF back on, one way or another.

so with CONFIG_DEBUG_INFO_BTF enabled, vmlinux linking has 2 extra steps:

  BTF     .btf.vmlinux.bin.o   -> pahole generates BTF data in vmlinux
  BTFIDS  vmlinux              -> resolve_btfids uses that BTF and patches vmlinux

with the DWARF bug, pahole generates corrupted BTF data and resolve_btfids
fails because of that with:

  FAILED unresolved symbol vfs_getattr

we are in the process of 'fixing' pahole not to depend on DWARF that much,
which works around the current DWARF issue

v3 was posted, and it looks like v4 could make it in:
  https://lore.kernel.org/bpf/20201104215923.4000229-1-jolsa@xxxxxxxxxx/T/

it's a change for both pahole and the kernel, so it will take some time
to be packaged

jirka
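PS: in case it helps with debugging, the two extra steps above roughly
correspond to running pahole and resolve_btfids by hand on the linked
vmlinux. a simplified sketch only; the real invocations (with extra
options and intermediate files) live in scripts/link-vmlinux.sh:

  # encode BTF, derived from DWARF, into the .BTF section of vmlinux
  pahole -J vmlinux

  # sanity check the generated BTF; corrupted data should fail to dump
  bpftool btf dump file vmlinux | head

  # resolve the BTF id sets and patch them back into vmlinux; with bad
  # BTF this is the step that reports 'FAILED unresolved symbol ...'
  tools/bpf/resolve_btfids/resolve_btfids vmlinux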