Re: [PATCH 0/9] ci: make Git's GitHub workflow output much more helpful

On Wed, Feb 23 2022, Phillip Wood wrote:

> On 22/02/2022 13:31, Ævar Arnfjörð Bjarmason wrote:
>> [...]
>> So just to make the point about one of those mentioned in my [1] with
>> some further details (I won't go into the whole thing to avoid repeating
>> myself):
>> I opened both of:
>>      https://github.com/git-for-windows/git/runs/4822802185?check_suite_focus=true
>>      https://github.com/dscho/git/runs/4840190622?check_suite_focus=true
>> Just now in Firefox 91.5.0esr-1. Both having been opened before, so
>> they're in cache, and I've got a current 40MB/s real downlink speed etc.
>> The former fully loads in around 5100ms, with your series here that's
>> just short of 18000ms.
>> So your CI changes are making the common case of just looking at a CI
>> failure more than **3x as slow as before**.
>
> I don't think that is the most useful comparison between the two.[...]

I'm not saying that it's the most useful comparison between the two, but
that there's a major performance regression introduced in this series
that so far isn't addressed or noted.
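
As an aside on methodology: below is a rough sketch of how that comparison
could be reproduced mechanically, using Playwright to drive headless Firefox
against the two runs quoted above. The "networkidle" wait and a cold cache
are only approximations of the interactive, warm-cache timings I quoted
(5100ms vs. ~18000ms, i.e. roughly 3.5x), so treat any numbers it produces
as indicative rather than exact.

  # Illustrative reproduction sketch only; not part of the series under review.
  import time
  from playwright.sync_api import sync_playwright

  URLS = [
      "https://github.com/git-for-windows/git/runs/4822802185?check_suite_focus=true",
      "https://github.com/dscho/git/runs/4840190622?check_suite_focus=true",
  ]

  with sync_playwright() as p:
      browser = p.firefox.launch()
      page = browser.new_page()
      for url in URLS:
          start = time.monotonic()
          # "networkidle" is used here as a stand-in for "fully loaded".
          page.goto(url, wait_until="networkidle")
          print(f"{url}: {time.monotonic() - start:.1f}s")
      browser.close()

(Needs "pip install playwright" and "playwright install firefox" first.)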

> [...]When
> I am investigating a test failure the time that matters to me is the
> time it takes to display the output of the failing test case. With the
> first link above the initial page load is faster, but to get to the
> output of the failing test case I have to click on "Run
> ci/print_test_failures.sh", then wait for that to load and then search
> for "not ok" to actually get to the information I'm after. With the
> second link the initial page load does feel slower, but then I'm
> presented with the test failures nicely highlighted in red; all I
> have to do is click on one and I've got the information I'm
> after. Overall that is much faster and easier to use.

Whether you think the regression is worth the end result is a subjective
judgement. I don't think it is, but I don't think you or anyone else is
wrong for coming to a different conclusion.

Whether it's OK to spend ~20s instead of ~5s rendering the new output
clearly depends on how much you value that output, and how much you're
willing to wait.

What I am saying, and what I hope you'll agree with, is that it's
something that should be addressed in some way by this series.

One way to do that would be to note the performance regression in a
commit message, and argue that despite the slowdown it's worth it.



