Re: [PATCH 0/8] Makefile: make command-list.h 2-5x as fast with -jN

On Wed, Oct 20, 2021 at 08:39:51PM +0200, Ævar Arnfjörð Bjarmason wrote:

> This series is based off an off-hand comment I made about making the
> cmdlist.sh faster, in the meantime much of the same methods are
> already cooking in "next" for the "lint-docs" target.
> 
> See 7/8 for the main performance numbers; along the way I stole some
> patches from Johannes Sixt, who'd worked on optimizing the script
> before, which complement this new method of generating this file by
> piggy-backing more on GNU make for managing a dependency graph for us.

I still think this is a much more complicated and error-prone approach
than just making the script faster. I know we can't rely on perl, but
could we use it optimistically?

The proof-of-concept below on top of your patch 6 does two things:

  - observes that there is no need for get_category_line in the loop;
    it just sorts and de-dupes the category list, but since we OR all
    of the resulting bits together anyway, neither the order nor the
    duplicates matter

  - uses perl to open each individual doc file to get the synopsis. It
    _feels_ like this should be something that sed or awk could do, but
    it is beyond me. However, speculatively trying perl is an easy win,
    and we can fall back to the shell loop.

Here are my timings:

Benchmark #1: sh generate-cmdlist.sh command-list.txt
  Time (mean ± σ):      40.4 ms ±  18.1 ms    [User: 44.9 ms, System: 7.1 ms]
  Range (min … max):    20.3 ms …  65.5 ms    10 runs

Benchmark #2: sh generate-cmdlist.sh.old command-list.txt
  Time (mean ± σ):      1.414 s ±  0.038 s    [User: 1.641 s, System: 0.863 s]
  Range (min … max):    1.344 s …  1.451 s    10 runs

Summary
  'sh generate-cmdlist.sh command-list.txt' ran
   34.96 ± 15.66 times faster than 'sh generate-cmdlist.sh.old command-list.txt'
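
(The output above is in hyperfine's format; assuming you have hyperfine
and keep the pre-series script around as generate-cmdlist.sh.old,
something like this should reproduce it:)

  hyperfine 'sh generate-cmdlist.sh command-list.txt' \
            'sh generate-cmdlist.sh.old command-list.txt'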

I hate having fallbacks, because the seldom-used version may bitrot.
I'm tempted to just write that loop in C, but there's a circular
dependency with using anything from libgit.a (even though it's really
only the git porcelain that cares about command-list.h, the generated
header goes into help.o, which goes into libgit.a; we could break that
dependency if we wanted, though). If we can do it in awk, that may be
worthwhile.
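
For what it's worth, here's an untested sketch of how the awk version
might look (command_list is the existing helper in the script, and
"getline var < file" is POSIX awk, so it shouldn't need gawk):

  print_command_list () {
  	echo "static struct cmdname_help command_list[] = {"
  	command_list "$1" |
  	awk '{
  		cmd = $1
  		path = "Documentation/" cmd ".txt"
  		# scan the doc file for its "git-foo - <synopsis>" line
  		while ((getline line < path) > 0) {
  			if (line ~ "^" cmd " - ") {
  				sub("^" cmd " - ", "", line)
  				printf "{ \"%s\", N_(\"%s\"), 0", cmd, line
  				for (i = 2; i <= NF; i++)
  					printf " | CAT_%s", $i
  				print " },"
  				break
  			}
  		}
  		close(path)
  	}'
  	echo "};"
  }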

But either way, I think this is superior to trying to parallelize the
Makefile:

  - it actually uses less CPU, rather than just trying to improve
    wall-clock time by using more cores

  - there's little chance of having some subtle dependency problem

Parallelizing makes a lot of sense to me when the operation is truly
expensive. But in this case it's literally just opening one file per
command, and the only reason it's slow is that we spawn a ton of
processes to do it.
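
(As a toy illustration of that overhead; 250 here is just a stand-in
for the number of commands, and the exact timings will vary, but the
shape won't:)

  # one process per command, as in the shell fallback...
  time sh -c 'for i in $(seq 250); do sed -n p </dev/null; done'
  # ...versus a single process handling all of them
  time sh -c 'seq 250 | perl -ne ""'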

---
diff --git a/generate-cmdlist.sh b/generate-cmdlist.sh
index a1ab2b1f07..f922eebe23 100755
--- a/generate-cmdlist.sh
+++ b/generate-cmdlist.sh
@@ -63,11 +63,25 @@ define_category_names () {
 print_command_list () {
 	echo "static struct cmdname_help command_list[] = {"
 
+	# try perl first, as we can do it all in one process
+	command_list "$1" |
+	perl -ne '
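+		# each line from command_list is "<cmd> <category>..."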
+		my ($cmd, @rest) = split;
+		open(my $fh, "<", "Documentation/$cmd.txt");
+		while (<$fh>) {
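+			# Documentation/<cmd>.txt has a line "<cmd> - <synopsis>"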
+			next unless /^$cmd - (.*)/;
+			print "{ \"$cmd\", N_(\"$1\"), 0";
+			print " | CAT_$_" for (@rest);
+			print " },\n";
+		}
+	' ||
 	command_list "$1" |
 	while read cmd rest
 	do
 		printf "	{ \"$cmd\", $(get_synopsis $cmd), 0"
-		printf " | CAT_%s" $(echo "$rest" | get_category_line)
+		printf " | CAT_%s" $rest
 		echo " },"
 	done
 	echo "};"


