Hi Junio,

On Sat, 21 May 2022, Junio C Hamano wrote:

> Johannes Schindelin <Johannes.Schindelin@xxxxxx> writes:
>
> >> * print the verbose logs only for the failed test cases (to
> >>   massively cut down on the size of the log, particularly when
> >>   there's only a couple of failures in a test file with a lot of
> >>   passing tests).
> >
> > That's an amazingly simple trick to improve the speed by a ton,
> > indeed. Thank you for this splendid idea!
> >
> >> * skip printing the full text of the test in
> >>   'finalize_test_case_output' when creating the group, i.e., use
> >>   '$1' instead of '$*' (in both passing and failing tests, this
> >>   information is already printed via some other means).
> >>
> >> If you wanted to make sure a user could still access the full
> >> failure logs (i.e., including the "ok" test results), you could
> >> print a link to the artifacts page as well - that way, all of the
> >> information we currently provide to users can still be found
> >> somewhere.
> >
> > That's a good point, I added that hint to the output (the link is
> > unfortunately not available at the time we print that advice).

(A stripped-down sketch of what these two tweaks boil down to is
appended at the end of this mail, for anybody who does not want to dig
through the actual patches.)

> https://github.com/git/git/runs/6539786128 shows that all in-flight
> topics merged to 'seen', except for ds/bundle-uri-more, pass the
> linux-leaks job. The ds/bundle-uri-more topic introduces some leaks
> to commands that happen to be used in tests that are marked as
> leak-checker clean, making the job fail.
>
> Which makes a great guinea pig for the CI output improvement topic.
>
> So, I created two variants of 'seen' with this linux-leaks breakage.
> One is with the js/ci-github-workflow-markup topic on this thread.
> The other one is with the ab/ci-github-workflow-markup topic (which
> uses a preliminary clean-up ab/ci-setup-simplify topic as its base).
> They should show identical test results and failures.
>
> And here are their outputs:
>
>  - https://github.com/git/git/runs/6539835065

I see that this is still with the previous iteration, and therefore
exhibits the same speed (or slowness) that Victoria investigated so
wonderfully.

So I really do not understand why you pointed to that run, given that
it still contains the logs of all the successful test cases, which
contributes in a major way to said slowness.

Maybe you meant to refer to https://github.com/git/git/runs/6540394142
instead, which at least for me loads much faster _and_ makes the output
as helpful as I intended it to be?

Ciao,
Dscho

>  - https://github.com/git/git/runs/6539900608
>
> If I recall correctly, the selling point of the ab/* variant over the
> js/* variant was that it would give a quicker UI response, but other
> than that, each variant's UI is supposed to be as newbie-friendly as
> the other's.
>
> When I tried the former, it reacted so poorly to my attempt to scroll
> (with a mouse scroll wheel, if it makes a difference) that sometimes
> I was staring at a blank dark-gray space for a few seconds, waiting
> for it to be filled by something, which was a bit of a jarring
> experience. When I tried the latter, it didn't show anything to help
> diagnose the details of the breakage in the "run make test" step, and
> the user needed to know that "print test failures" is what needs to
> be looked at, which I am not sure is an inherent limitation of the
> approach. After the single extra click, navigating the test output to
> find the failed steps among the many others that succeeded was not a
> very pleasant experience.
>
> Those who are interested in the UX experiment may want to visit these
> two outputs to see how usable each of them is for themselves.
>
> Thanks.
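
P.S.: For anybody skimming this thread, here is a heavily simplified
sketch of what the two tweaks quoted at the top boil down to. This is
my own illustration, not the actual code of the topic branch: it
assumes a hypothetical $trace_file into which the verbose test output
is tee'd, and two hooks that the test framework calls around each test
case. The '::group::'/'::endgroup::' lines are GitHub Actions workflow
commands that render as collapsible sections in the web UI.

    # stand-in for wherever the verbose test output is tee'd to;
    # the variable name is made up for the sake of this example
    trace_file=trace.out

    start_test_case_output () {
        # remember where this test case's trace begins, so that it
        # can be replayed later if (and only if) the test case fails
        test_case_start=$(wc -c <"$trace_file")
    }

    finalize_test_case_output () {
        outcome=$1
        shift
        case "$outcome" in
        failure)
            # open a collapsible group, headed by the test title
            # only ("$1") rather than the full test body ("$*")
            echo "::group::failed: $1"
            # replay just this test case's verbose trace
            tail -c +$((test_case_start + 1)) "$trace_file"
            echo "::endgroup::"
            # point the reader at the complete log, which (including
            # the passing tests) is still kept in the build artifacts
            echo "(full logs are available in the build artifacts)"
            ;;
        *)
            # stay silent for passing/skipped tests, to keep the log
            # small and fast to render
            ;;
        esac
    }

Only failing test cases produce a group, and only the test title goes
into the group header; everything else relies on the artifacts, which
is exactly the trade-off discussed above.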