Re: [PATCH v4 00/10] The final building block for a faster rebase -i

On Mon, May 29, 2017 at 12:51 PM, Johannes Schindelin
<Johannes.Schindelin@xxxxxx> wrote:
> Hi René,
>
> On Sat, 27 May 2017, René Scharfe wrote:
>
>> On 26.05.2017 at 05:15, Liam Beguin wrote:
>> > I tried to time the execution on an interactive rebase (on Linux) but
>> > I did not notice a significant change in speed.
>> > Do we have a way to measure performance / speed changes between version?
>>
>> Well, there's performance test script p3404-rebase-interactive.sh.  You
>> could run it e.g. like this:
>>
>>       $ (cd t/perf && ./run origin/master HEAD ./p3404*.sh)
>>
>> This would compare the performance of master with the current branch
>> you're on.  The results of p3404 are quite noisy for me on master,
>> though (saw 15% difference between runs without any code changes), so
>> take them with a bag of salt.
>
> Indeed. Our performance tests are simply not very meaningful.
>
> Part of it is the use of shell scripting (which defeats performance
> testing pretty well),

Don't the performance tests take long enough that the shell-scripting
overhead gets lost in the noise? E.g. on Windows, what do you get when
you run this in t/perf:

    $ GIT_PERF_REPEAT_COUNT=3 \
          GIT_PERF_MAKE_OPTS="-j6 NO_OPENSSL=Y BLK_SHA1=Y CFLAGS=-O3" \
          ./run v2.10.0 v2.12.0 v2.13.0 p3400-rebase.sh

I see split-index performance improving by 28% in 2.12 and 58% in
2.13, with small error bars even at just 3 runs. This is on Linux, but
my sense is that fork overhead on Windows isn't so bad as to matter
here.
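
As an aside, if p3404 is too noisy at the default of 3 runs, raising
GIT_PERF_REPEAT_COUNT (the knob t/perf already respects; it defaults
to 3) should tighten up the error bars, e.g.:

    $ (cd t/perf &&
       GIT_PERF_REPEAT_COUNT=10 ./run origin/master HEAD ./p3404*.sh)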

I'd also be interested to see what sort of results you get for my
"grep: add support for the PCRE v1 JIT API" patch which is in pu now,
assuming you have a PCRE newer than 8.32 or so.
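
Something like this should do it, assuming p7810-grep.sh is the
relevant perf script for that comparison and the topic is still in pu:

    $ (cd t/perf && ./run origin/master origin/pu ./p7810-grep.sh)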

> another part is that we have no performance testing
> experts among us, and failed to attract any, so not only is the repeat
> count ridiculously small, we also have no graphing worth speaking of (and
> therefore it is impossible to even see trends, which is a rather important
> visual way to verify sound performance testing).
>
> Frankly, I have no illusion about this getting fixed, ever.

I have a project on my TODO list that would address this. I'd be
interested to know what people think about the design:

* Run the perf tests in some mode where the raw runtimes are saved away
* Have some way to dump a static HTML page from that with graphs over
time (gnuplot SVG output? see the sketch below)
* Supply some config file to drive this, so you can e.g. run each
test N times against your repo X for the last 10 versions of git.
* Since it's static HTML it would be trivial for anyone to share such
results, and e.g. set up a cron job to regularly publish them to
GitHub Pages.
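
To make the graphing half concrete, here's a rough sketch of what the
gnuplot step could look like. The file name and format ("times.dat"
with one "version seconds" pair per line) are made up; the real input
would be whatever format the saved runtimes end up in:

    # sketch only: render saved runtimes as a static SVG
    # (save as plot.gp and run "gnuplot plot.gp")
    set terminal svg size 800,400
    set output 'p3404.svg'
    set ylabel 'seconds (min of N runs)'
    set style data linespoints
    plot 'times.dat' using 2:xtic(1) title 'p3404-rebase-interactive'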

> So yes, in the meantime we need to use those numbers with a considerable
> amount of skepticism.

...however, while the presentation could be improved, I've seen no
reason to think that the underlying numbers are suspect, or that the
perf framework needs to be rewritten rather than improved upon. If you
disagree I'd like to know why.
