René Scharfe <l.s.r@xxxxxx> writes:

> Logged the sizes of files handed to test_cmp (on macOS).  19170 calls,
> median size 42 bytes, average size 617 bytes.  2307 calls with empty
> files, 1093 of which in t1092 alone.  The two biggest files in t1050,
> 2000000 and 2500000 bytes.  t9300 in third place with 180333, an
> order of magnitude smaller.
>
> t1050 at 8a4e8f6a67 (The second batch, 2022-12-26) on Windows:
>
>   Benchmark 1: sh.exe t1050-large.sh
>     Time (mean ± σ):     18.312 s ± 0.069 s    [User: 0.000 s, System: 0.003 s]
>     Range (min … max):   18.218 s … 18.422 s    10 runs
>
> ... and with the patch:
>
>   Benchmark 1: sh.exe t1050-large.sh
>     Time (mean ± σ):      5.709 s ± 0.046 s    [User: 0.000 s, System: 0.003 s]
>     Range (min … max):    5.647 s … 5.787 s    10 runs
>
> So it works as advertised for big files, but calling an external
> program 19000 times takes time as well, which explains the longer
> overall test suite duration.
>
> If we use test_cmp_bin for the two biggest comparisons we get the
> same speedup:
>
>   Benchmark 1: sh.exe t1050-large.sh
>     Time (mean ± σ):      5.719 s ± 0.089 s    [User: 0.000 s, System: 0.006 s]
>     Range (min … max):    5.659 s … 5.960 s    10 runs
>
> Is this safe?  The files consist of X's and Y's at the point of
> comparison, so they aren't typical binary files, but they don't
> have line endings at all or any user-readable content, so I think
> treating them as blobs is appropriate.

Nice analysis.

If we can use the platform "diff -u" (i.e. we somehow find that it is
possible to stop ignoring the CRLF vs LF difference), then it should
give us similarly good performance for large files (but the cost of
spawning the tool 19000 times would also be comparable), but we are
not there yet, I presume.
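
For what it's worth, the substitution René suggests would be a one-line
change in each of the two tests that compare the big files; a rough
sketch of one of them (quoted from memory, so the surrounding lines may
not match t1050-large.sh exactly):

--- a/t/t1050-large.sh
+++ b/t/t1050-large.sh
@@ test_expect_success 'checkout a large file' '
 	git update-index --add --cacheinfo 100644 $large1 another &&
 	git checkout another &&
-	test_cmp large1 another
+	test_cmp_bin large1 another
 '

As far as I can tell, test_cmp_bin boils down to a plain cmp of the two
files, so it skips both the CR/LF massaging our test_cmp does on Windows
and any line-oriented comparison, which seems consistent with treating
these blobs as binary.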