Re: [PATCH 1/3] commit-reach: use one walk in remove_redundant()

On 1/31/2021 3:25 PM, Junio C Hamano wrote:
> Derrick Stolee <stolee@xxxxxxxxx> writes:
> 
>>> So, the inner loop makes sure we won't revisit STALE parent, but
>>> keep digging parents we haven't seen, and stop when the generation
>>> is old enough.  What happens when there is no generation number
>>> computed yet, I wonder...  We'll keep getting infinity and dig all
>>> the way down to root?
>>
>> If we are on commits that have no generation number yet, then we
>> will walk until reaching commits in the commit-graph file that have
>> a computed generation (or in the heuristic case, when we have reached
>> all but one of the commits).
>>
>> In the case of the commit-graph, all commits will have generation
>> number "infinity". In such a case, perhaps the old algorithm _is_
>> the best we can do, at least for now.
> 
> Hmph, I am afraid that such is life X-<.
> 
>> One way to ensure we do not regress from the current behavior
>> would be to condition the new algorithm with
>>
>> 	if (generation_numbers_enabled(the_repository))
>> 		new_algorithm();
>> 	else
>> 		old_algorithm();
>>
>> much like in repo_is_descendant_of().
>>
>> Is that a good plan?
> 
> It would certainly avoid one particular form of regression, so it is
> better than nothing.
> 
> But at the same time, we'd probably want to encourage people to
> enable and maintain generation numbers for the majority of commits
> in their repository, but unless you "gc" twice a day or something,
> you'd inevitably have a mixture, say, all commits that are more than
> two weeks old are covered by commit-graph, but more recent ones are
> not yet enumerated, and you have to traverse at runtime.
> 
> And the performance characteristics we would care the most in the
> longer term is to make sure that we perform well in such a mixed
> environment for the parts of the history that are not old enough.

You're right that the mixed case, where all of the input commits
have infinite generation number, is the worst possible case for
this algorithm. However, if even a single input commit has a finite
generation number, then this new algorithm should out-perform the
one using paint_down_to_common() in all cases.

I tested the painful case of

	git merge-base --independent 2ecedd756908 d2360a398f0b

in the Linux kernel repository with different commit-graph files.
I found that the new algorithm performed worse than the old one
until fewer than 2,500 commits were reachable from these two tips
but absent from the commit-graph file. That is probably a smaller
gap than a typical user would have.

I will modify my check to the following:

	if (generation_numbers_enabled(r)) {
		int i;

		/*
		 * If we have a single commit with finite generation
		 * number, then the _with_gen algorithm is preferred.
		 */
		for (i = 0; i < cnt; i++) {
			if (commit_graph_generation(array[i]) < GENERATION_NUMBER_INFINITY)
				return remove_redundant_with_gen(r, array, cnt);
		}
	}

	return remove_redundant_no_gen(r, array, cnt);

> Many things can be sped up by precomputing and storing the result in
> the commit-graph file and that is not all that interesting or
> surprising part of the story, I would think.  Rather, we want to
> ensure that we do not perform on the youngest part of the history
> any worse---that way, people will have strong incentive to enable
> commit-graph, as things will work superbly for older parts of the
> history, while not doing any worse than the original system for the
> newest parts of the history.

I'm definitely biased to a case where there is an up-to-date
commit-graph file, since I care most about server-side performance
or users with background maintenance enabled (and computing the
commit-graph frequently). You are right to call out that that is
not always the case.
 
> There was a side thread where somebody wished if they can remove
> support for all the codepaths that do not use commit-graph, but
> would this be an example of how such a wish is impractical, I have
> to wonder?

Some cases, such as sort_in_topological_order(), might allow
de-duplication with roughly the same performance, give or take
some overhead. Verifying that for each case takes time, however.

Thanks,
-Stolee
