One of the primary use cases for pahole is BTF deduplication during the Linux
kernel build. In that case, DWARF data containing more than 5 million types is
loaded, so using a hash map with a small number of buckets is quite expensive
due to hash collisions. This patch bumps the size of the hash map and reduces
the overhead of this part of the DWARF loading process. This shaves off about
1 second out of about 20 seconds total for Linux BTF dedup.

Signed-off-by: Andrii Nakryiko <andriin@xxxxxx>
---
 dwarf_loader.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/dwarf_loader.c b/dwarf_loader.c
index 63988011978f..05c96bef09e3 100644
--- a/dwarf_loader.c
+++ b/dwarf_loader.c
@@ -89,7 +89,7 @@ static void dwarf_tag__set_spec(struct dwarf_tag *dtag, dwarf_off_ref spec)
 	*(dwarf_off_ref *)(dtag + 1) = spec;
 }
 
-#define HASHTAGS__BITS 8
+#define HASHTAGS__BITS 15
 #define HASHTAGS__SIZE (1UL << HASHTAGS__BITS)
 
 #define obstack_chunk_alloc malloc
--
2.24.1
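
To illustrate the trade-off, here is a minimal, self-contained C sketch of a
chained hash table whose bucket count is derived from HASHTAGS__BITS, in the
spirit of the table in dwarf_loader.c. The struct tag_entry, hashtags__fn()
and the helper names are hypothetical and for illustration only; they are not
pahole's actual code. With roughly 5 million types, 2^8 = 256 buckets give
average chains of ~19500 entries, while 2^15 = 32768 buckets bring that down
to ~153, which is the collision reduction the patch is after.

#include <stdint.h>
#include <stdio.h>

/* Mirrors the macros touched by the patch: 1UL << 15 = 32768 buckets
 * instead of the previous 1UL << 8 = 256. */
#define HASHTAGS__BITS 15
#define HASHTAGS__SIZE (1UL << HASHTAGS__BITS)

/* Hypothetical entry keyed by a DWARF DIE offset; pahole's real
 * struct dwarf_tag carries more state than this. */
struct tag_entry {
	uint64_t die_offset;
	struct tag_entry *next;	/* collision chaining */
};

/* Multiplicative hash folded down to HASHTAGS__BITS bits; an
 * illustrative choice, not necessarily what dwarf_loader.c uses. */
static inline uint32_t hashtags__fn(uint64_t key)
{
	return (uint32_t)((key * 11400714819323198485ULL) >>
			  (64 - HASHTAGS__BITS));
}

static struct tag_entry *buckets[HASHTAGS__SIZE];

static void hashtags__insert(struct tag_entry *entry)
{
	uint32_t b = hashtags__fn(entry->die_offset);

	entry->next = buckets[b];
	buckets[b] = entry;
}

static struct tag_entry *hashtags__find(uint64_t die_offset)
{
	struct tag_entry *e = buckets[hashtags__fn(die_offset)];

	for (; e != NULL; e = e->next)
		if (e->die_offset == die_offset)
			return e;
	return NULL;
}

int main(void)
{
	struct tag_entry e = { .die_offset = 0x1234, .next = NULL };

	hashtags__insert(&e);

	/* With ~5 million types, the average chain length drops from
	 * ~19500 entries (2^8 buckets) to ~153 (2^15 buckets). */
	printf("buckets: %lu, avg chain for 5M types: %.1f, found 0x1234: %s\n",
	       HASHTAGS__SIZE, 5000000.0 / HASHTAGS__SIZE,
	       hashtags__find(0x1234) ? "yes" : "no");
	return 0;
}

The extra memory cost of the larger table is just the pointer array
(32768 pointers instead of 256), which is negligible next to the millions
of loaded DWARF tags it indexes.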