On Thu, 2024-11-28 at 01:24 +0000, Ihor Solodrai wrote:
> Change multithreaded implementation of BTF encoding:
>
> * Use a single btf_encoder accumulating BTF for all compilation units
> * Make BTF encoding routine exclusive: only one thread at a time may
>   execute btf_encoder__encode_cu
> * Introduce CU ids: an id is an index of a CU, in order they are
>   created in dwarf_loader.c
> * Introduce CU__PROCESSED cu_state to indicate what CUs have been
>   processed by the encoder
> * Enforce encoding order of compilation units (struct cu) loaded
>   from DWARF by utilizing global struct cus as a queue
> * reproducible_build option is now moot: BTF encoding is always
>   reproducible with this change
> * Most of the code that merged the results of multiple BTF encoders
>   into one BTF after CU processing is removed
>
> Motivation behind this change and analysis that led to it are in the
> cover letter to the patch series.
>
> In short, this implementation of BTF encoding makes it reproducible
> without sacrificing the performance gains from parallel
> processing. The speed in terms of wall-clock time is comparable to
> non-reproducible runs on pahole/next [1]. The memory footprint is
> lower with increased number of threads.
>
> pahole/next (12ca112):
>
>  Performance counter stats for '/home/theihor/dev/dwarves/build/pahole -J -j24 --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func,decl_tag_kfuncs --btf_encode_detached=/dev/null --lang_exclude=rust /home/theihor/git/kernel.org/bpf-next/kbuild-output/.tmp_vmlinux1' (13 runs):
>
>     50,493,244,369      cycles    ( +- 0.26% )
>
>     1.6863 +- 0.0150 seconds time elapsed    ( +- 0.89% )
>
> jobs  1, mem  546556 Kb, time 4.53 sec
> jobs  2, mem  599776 Kb, time 2.81 sec
> jobs  4, mem  661756 Kb, time 2.05 sec
> jobs  8, mem  764584 Kb, time 1.58 sec
> jobs 16, mem  844856 Kb, time 1.59 sec
> jobs 32, mem 1047880 Kb, time 1.69 sec
>
> This patchset on top of pahole/next:
>
>  Performance counter stats for '/home/theihor/dev/dwarves/build/pahole -J -j24 --btf_features=encode_force,var,float,enum64,decl_tag,type_tag,optimized_func,consistent_func,decl_tag_kfuncs --btf_encode_detached=/dev/null --lang_exclude=rust /home/theihor/git/kernel.org/bpf-next/kbuild-output/.tmp_vmlinux1' (13 runs):
>
>     31,175,635,417      cycles    ( +- 0.22% )
>
>     1.58644 +- 0.00501 seconds time elapsed    ( +- 0.32% )
>
> jobs  1, mem  544780 Kb, time 4.47 sec
> jobs  2, mem  553944 Kb, time 4.68 sec
> jobs  4, mem  563352 Kb, time 2.36 sec
> jobs  8, mem  585508 Kb, time 1.73 sec
> jobs 16, mem  635212 Kb, time 1.61 sec
> jobs 32, mem  772752 Kb, time 1.59 sec
>
> [1]: https://git.kernel.org/pub/scm/devel/pahole/pahole.git/commit/?h=next&id=12ca11281912c272f931e836b9160ee827250716
>
> Signed-off-by: Ihor Solodrai <ihor.solodrai@xxxxx>
> ---

I think this is a solid idea and a good observation, but the
implementation inherits unnecessary complexity from the previous
design.
There is no real need to keep the single-threaded and multi-threaded
modes separate. Instead:

- the main thread can serve as a dedicated "collector" thread, waiting
  sequentially for CUs with ids ranging from 0 to the number of CUs;
- a configurable number of worker threads can parse DWARF concurrently
  and put CU objects into the processing queue;
- the queue size has to be bounded to keep memory consumption within
  certain limits (but be careful: a simple bounded queue protected by
  a semaphore won't do, because the queue can fill up entirely with
  ids other than the one the collector is waiting for, e.g. when the
  first CU takes a very long time to process and the N CUs after it
  are processed very quickly; see the sketch below).

[...]
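Something along the lines of the untested sketch below could work. The
names (cu_queue, cu_queue__put, cu_queue__get_next) are made up for
illustration and do not correspond to anything in the pahole sources.
The key point is to bound admission by CU id rather than by element
count, so the slot for the id the collector is waiting on can never be
starved by later CUs; q->slots must be calloc()'ed to q->window entries
before the threads start.

    /* Untested sketch, illustrative names only. */
    #include <pthread.h>

    struct cu;			/* opaque here; produced by the DWARF workers */

    struct cu_queue {
    	pthread_mutex_t lock;
    	pthread_cond_t	changed;
    	struct cu	**slots;	/* ring buffer indexed by id % window */
    	unsigned int	window;		/* max ids in flight, bounds memory */
    	unsigned int	next_id;	/* next id the collector will consume */
    	unsigned int	total;		/* total number of CUs expected */
    };

    /* Worker side: block until 'id' fits into the admission window, then
     * publish the CU.  Bounding by id (not by element count) avoids the
     * deadlock where the queue is full of ids the collector cannot
     * consume yet while the id it needs is still being parsed. */
    static void cu_queue__put(struct cu_queue *q, struct cu *cu, unsigned int id)
    {
    	pthread_mutex_lock(&q->lock);
    	while (id >= q->next_id + q->window)
    		pthread_cond_wait(&q->changed, &q->lock);
    	q->slots[id % q->window] = cu;
    	pthread_cond_broadcast(&q->changed);
    	pthread_mutex_unlock(&q->lock);
    }

    /* Collector side (main thread): take CUs strictly in id order. */
    static struct cu *cu_queue__get_next(struct cu_queue *q)
    {
    	struct cu *cu;

    	pthread_mutex_lock(&q->lock);
    	if (q->next_id == q->total) {
    		pthread_mutex_unlock(&q->lock);
    		return NULL;		/* all CUs consumed */
    	}
    	while (q->slots[q->next_id % q->window] == NULL)
    		pthread_cond_wait(&q->changed, &q->lock);
    	cu = q->slots[q->next_id % q->window];
    	q->slots[q->next_id % q->window] = NULL;
    	q->next_id++;
    	pthread_cond_broadcast(&q->changed);	/* reopen the window for workers */
    	pthread_mutex_unlock(&q->lock);
    	return cu;
    }

With this, the main thread simply loops on cu_queue__get_next() and
feeds each CU to btf_encoder__encode_cu() in id order, so the output
stays reproducible regardless of how many workers parsed DWARF or in
what order they finished.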