On Mon, 24 Aug 2009, Daniel Barkalow wrote:
> On Sun, 23 Aug 2009, Junio C Hamano wrote:
> > But it makes me wonder if this logic to filter refs is buying us anything.
> >
> > 	for (rm = refs; rm; rm = rm->next) {
> > +		nr_refs++;
> > 		if (rm->peer_ref &&
> > 		    !hashcmp(rm->peer_ref->old_sha1, rm->old_sha1))
> > 			continue;
> > 		ALLOC_GROW(heads, nr_heads + 1, nr_alloc);
> > 		heads[nr_heads++] = rm;
> > 	}
> >
> > What is the point of not asking for the refs that we know are the same?
>
> This code is part of the original C implementation of fetch; I suspect the
> optimization was somehow in the shell version and made sense there,
> perhaps because there wasn't a quickfetch in the shell version, or because
> there was some non-negligible per-ref cost in the code around there, since
> it was calling helper programs and such.
I don't remember copying it from the shell version but my memory is
terrible, so I could easily be wrong. The relevant commit message was:
  "git-fetch2: remove ref_maps that are not interesting

   Once we have the full list of ref_maps, remove any where the local
   and remote sha1s are the same - as we don't need to do anything for
   them."
So that doesn't help. I was very concerned about performance, though
(which was why I wanted fetch in C in the first place), so I may have
added it to speed up fetches that only update a few refs - and I assume
that quickfetch came along later, after you absorbed the work into the
transport series?
Anyway, I think it makes sense to remove the filtering from
transport_fetch_refs(), like your patch does.
If it makes a difference for the actual protocol, fetch_refs_via_pack()
could filter them at that stage.
I think it would certainly be worth investigating the performance aspects
... no time tonight, but maybe tomorrow.
--
Julian
---
Some circumstantial evidence is very strong, as when you find a trout in
the milk.
-- Thoreau