Re: [PATCH 4/7] commit-graph: fix bogus counter in "Scanning merged commits" progress line

On Sat, Jun 26 2021, René Scharfe wrote:

> Am 26.06.21 um 16:11 schrieb Ævar Arnfjörð Bjarmason:
>> [...]
>> To reference my earlier E-Mail[1]: are you eating the first apple or
>> the zeroth apple? I don't think one is more or less right in the
>> mathematical sense, I just think for UX aimed at people, counting "laps"
>> makes more sense than counting completed items.
>
> The difference between counting iterations and work items vanishes as
> their numbers increase.  The most pronounced difference is observed when
> there is only a single item of work.  The count-iterations model shows
> 1/1 from start to finish.  The count-work model shows 0/1 initially and
> 1/1 after the work is done.
>
> As a user I prefer the second one.  If presented with just a number and
> a percentage then I assume 100% means all work is done and would cancel
> the program if that status is shown for too long.  With Git I have
> learned that only the final ", done" really means done in some cases,
> but that's an unnecessary lesson and still surprising to me.

What progress bar of ours goes slow enough that the difference matters
for you in either case?

The only one I know of is "Enumerating objects", which notably stalls at
the start, and which I'm proposing changing the output of in:
https://lore.kernel.org/git/patch-18.25-e21fc66623f-20210623T155626Z-avarab@xxxxxxxxx/

>> [...]
>> Which is one of the two reasons I prefer this pattern, i.e. this is less
>> verbose:
>>
>>     start_progress()
>>     for i in (0..X-1):
>>         display_progress(i+1)
>>         work()
>>     stop_progress()
>>
>> Than one of these, which AFAICT would be your recommendation:
>>
>>     # Simplest, but stalls on work()
>>     start_progress()
>>     for i in (0..X-1):
>>         work()
>>         display_progress(i+1)
>>     stop_progress()
>>
>>     # More verbose, but doesn't stall:
>>     start_progress()
>>     for i in (0..X-1):
>>         display_progress(i)
>>         work()
>>         display_progress(i+1)
>>     stop_progress()
>>
>>     # Ditto:
>>     start_progress()
>>     display_progress(0)
>>     for i in (0..X-1):
>>         work()
>>         display_progress(i+1)
>>     stop_progress()
>>
>> And of course if your loop continues or whatever you'll need a last
>> "display_progress(X)" before the "stop_progress()".
>
> The count-work model needs one more progress update than the
> count-iteration model.  We could do all updates in the loop header,
> which is evaluated just the right number of times.  But I think that we
> rather should choose between the models based on their results.
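
That loop-header version would look something like this in C, as a
sketch against the progress API, with "work()" and "nitems" being
stand-ins:

    struct progress *p = start_progress("Items", nitems);
    uint64_t i;

    /*
     * All nitems+1 updates happen in the loop header, which is
     * evaluated once more than the body: we show 0/N before any
     * work and N/N after the last item.
     */
    for (i = 0; display_progress(p, i), i < nitems; i++)
        work(i);
    stop_progress(&p);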

I think we should be more biased towards API convenience here than
anything else, because for most of these they'll go so fast that users
won't see the difference. I just also happen to think that the easy way
to do it is also more correct.

Also, for these cases you're focusing on we count up to exactly 100%,
and we therefore expect N calls to display_progress() (ignoring the
rare but allowed duplicate calls with the same number, which most
callers don't use). So we could just have a convenience API of:

    start_progress()
    for i in (0..X-1):
        progress_update() /* passing "i" not needed, we increment internally */
        work()
    stop_progress()
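
I.e. a sketch in C of such a wrapper (the "struct auto_progress" type
and the progress_update() name are made up here, nothing like them
exists in progress.h):

    struct auto_progress {
        struct progress *progress;
        uint64_t count;
    };

    static void progress_update(struct auto_progress *ap)
    {
        /* count "laps": report item i as in-progress, i.e. i+1 of N */
        display_progress(ap->progress, ++ap->count);
    }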

Then we could even make showing 0/N or 1/N the first time
configurable, but we could only do both if we use the API as I'm
suggesting, not as you want to use it.

You can also sort of get what I want with what you're suggesting, but
you'd conflate "setup" work with the first item, which matters e.g. for
"Enumerating objects" and my "stalled" patch. It's also more verbose at
the code level, and more complex (you need to deal with "break" and
"continue"), so why would you?

Which I think is the main point where we're not so much disagreeing as
talking a bit past one another.

I.e. I think you're narrowly focused on what I see as a display issue
in the progress bars we currently show, while I'm mainly interested in
how we use the API; we should pick a way of using it that lets us do
more with displaying progress in the future.

> If each work item finishes within a progress display update period
> (half a second) then there won't be any user-visible difference and
> both models would do.

A trivial point, but don't you mean a second? AFAICT for "delayed"
progress we display after 2 seconds, then update every second; it's
only if we have display_throughput() that we update every 0.5s.

>> The other is that if you count laps you can have your progress bar
>> optionally show progress on that item. E.g. if we stall we could show
>> the seconds we've spent hung on that item, or "3/3, ETA 40s". I have a
>> patch[3] that takes an initial step towards that, with some more queued
>> locally.
>
> A time estimate for the whole operation (until ", done") would be nice.
> It can help with the decision to go for a break or to keep staring at
> the screen.  I guess we just need to remember when start_progress() was
> called and can then estimate the remaining time once the first item is
> done.  Stalling items would push the estimate further into the future.
>
> A time estimate per item wouldn't help me much.  I'd have to subtract
> to get the number of unfinished items, catch the maximum estimated
> duration and multiply those values.  OK, by the time I manage that Git
> is probably done -- but I'd rather like to leave arithmetic tasks to
> the computer.
>
> Seconds spent for the current item can be shown with both models.  The
> progress value is not sufficient to identify the problem case in most
> cases.  An ID of some kind (e.g. a file name or hash) would have to be
> shown as well for that.  But how would I use that information?
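
The whole-operation estimate you describe is cheap to compute once we
remember when start_progress() was called; a sketch of just the
arithmetic (nothing like this is in progress.c yet; time() is from
<time.h> and uint64_t from <stdint.h>):

    static uint64_t eta_seconds(time_t started, uint64_t done, uint64_t total)
    {
        time_t elapsed = time(NULL) - started;

        if (!done)
            return 0; /* no estimate until the first item finishes */
        /* linear extrapolation from the finished items */
        return (uint64_t)elapsed * (total - done) / done;
    }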

If we're spending enough time on one item to update progress for it N
times we probably want to show throughput/progress/ETA mainly for that
item, not the work as a whole.

If we do run into those cases and want to convert them to show some
intra-item progress then, had we picked your way first, we'd first need
to migrate them over to my suggested way of using the API; with my
suggested use we only need to add new API calls (display_throughput(),
and similar future calls/implicit display).

Consider e.g. using the packfile-uri response to ask the user to
download N URLs: just because we grab one at 1MB/s, that probably won't
do much to inform our estimate for the next one (which may be on a
different CDN etc.).

The throughput API was intended (and is mainly used) for estimating
the whole batch; I just wonder whether, as we use it more widely, that
use-case won't turn out to be the exception.



