On 21. 08. 24, 7:32, Jiri Slaby wrote:
On 20. 08. 24, 16:33, Jiri Olsa wrote:
On Tue, Aug 20, 2024 at 10:59:50AM +0200, Jiri Slaby (SUSE) wrote:
From: Jiri Slaby <jslaby@xxxxxxx>
== WARNING ==
This is only a PoC. There are deficiencies: for example, CROSS_COMPILE and
LLVM are completely unhandled.
The simple version would be to just do this there:
ifeq ($(CONFIG_64BIT),y)
but that has its own deficiencies, of course.
So any ideas, inputs?
== WARNING ==
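For reference, the "simple version" mentioned above, with its syntax fixed, might look like the following sketch. The PAHOLE_JOBS variable name is illustrative, not an existing kbuild variable, and as noted this does nothing for CROSS_COMPILE or LLVM, since CONFIG_64BIT describes the target, not the pahole binary doing the work:

```make
# Sketch only: gate pahole's parallelism on the target word size.
# CONFIG_64BIT is merely a proxy for the pahole binary's bitness:
# a 64bit pahole cross-building a 32bit kernel would be needlessly
# serialized, and a 32bit pahole building a 64bit kernel still breaks.
ifeq ($(CONFIG_64BIT),y)
PAHOLE_JOBS := -j
else
PAHOLE_JOBS :=
endif
```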
When pahole is run with -j on 32bit userspace (32bit pahole in
particular), it randomly fails with OOM:
btf_encoder__tag_kfuncs: Failed to get ELF section(62) data: out of memory.
btf_encoder__encode: failed to tag kfuncs!
or simply SIGSEGV (failed to allocate the btf encoder).
It depends a lot on how many threads are created.
So do not invoke pahole with -j on 32bit.
could you share more details about your setup?
does it need to run on pure 32bit to reproduce?
armv7l builds are 32bit only.
I can't reproduce it when doing a cross build and running a 32bit pahole
on x86_64.
i586 is built using a 64bit kernel. It is enough to have a 32bit userspace.
As written in the linked bug:
https://bugzilla.suse.com/show_bug.cgi?id=1229450#c6
FWIW, steps to reproduce locally:
docker pull jirislaby/pahole_crash
docker run -it jirislaby/pahole_crash
The VM space of pahole is exhausted:
process map: https://bugzilla.suse.com/attachment.cgi?id=876821
strace of mmaps: https://bugzilla.suse.com/attachment.cgi?id=876822
You need to run with a large enough -j on a fast machine. Note that this
happens on build hosts even with -j4, but they are under heavy load, so
the amount of memory held by the threads in parallel is high.
From https://bugzilla.suse.com/show_bug.cgi?id=1229450#c20:
Run on 64bit:
pahole -j32 -> 4.102 GB
pahole -j16 -> 3.895 GB
pahole -j1 -> 3.706 GB
On 32bit (the same vmlinux):
pahole -j32 -> 2.870 GB (crash)
pahole -j16 -> 2.810 GB
pahole -j1 -> 2.444 GB
Look there for the full massif report.
So now I think we should disable BTF generation with 32bit pahole
completely. Or someone needs to debug it and improve the debug info
loading so that it does not eat that much memory.
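An alternative to keying off CONFIG_64BIT, sketched here under the assumption of GNU make and a POSIX shell, would be to inspect the pahole binary itself: EI_CLASS, the byte at offset 4 of the ELF header, is 1 for a 32bit binary and 2 for a 64bit one. Again, PAHOLE_JOBS is an illustrative name, not an existing kbuild variable:

```make
# Sketch: read EI_CLASS (1 byte at offset 4) of the pahole binary.
# 2 == ELFCLASS64; a 32bit pahole gets no -j (or no BTF at all).
PAHOLE ?= pahole
pahole-elf-class := $(shell od -An -tu1 -j4 -N1 "$$(command -v $(PAHOLE))" 2>/dev/null | tr -d ' ')
ifeq ($(pahole-elf-class),2)
PAHOLE_JOBS := -j
else
PAHOLE_JOBS :=
endif
```

This would sidestep the cross-compile problem, since it asks about the tool that actually allocates the memory rather than about the kernel being built.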
thanks,
--
js
suse labs