On 3/14/07, Junio C Hamano <junkio@xxxxxxx> wrote:
"Santi Béjar" <sbejar@xxxxxxxxx> writes: >> I tried the "NULL fetch between 1000-refs repositories" test, >> which prompted the git-fetch--tool work that was done on >> jc/fetch topic in 'next', with the following versions: >> >> (1) 1.5.0 (without any git-fetch--tool optimization) >> (2) master (ditto) >> (3) master with jc/fetch (but not sb/fetch topic) >> (4) next ((3) plus sb/fetch and others) >> >> The test scripts are at the end of this message. Both (1) and >> (2) take 3 minutes 7 seconds wallclock time. (3) improves it >> down to 15 seconds. (4) makes the operation spend 24 seconds >> (the times are all on my primary machine x86-64 with 1GB, hot >> cache and average of three runs each). > > I think it is not fair,...
[...]
> , and you may not like the numbers, but if you call that "is not
> fair", I do not know what could be considered fair.
I would consider fair the comparison you did not quote, a comparison
with the merge logic written in C.  I know that (4) is a step backwards
in performance as it is now, and I understand that with those numbers
the "Split" patch must be reverted.

Santi
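
For reference, here is a minimal sketch of what a "NULL fetch between
1000-refs repositories" benchmark could look like.  These are not the
scripts from the quoted message (which are not included here); the
repository names, the branch count, and the use of plain "time" are
assumptions made purely for illustration.

  #!/bin/sh
  # Minimal sketch of a "NULL fetch" benchmark; not the original scripts.
  # Build a repository with 1000 branches, clone it, then time a fetch
  # that has nothing new to transfer, so only per-ref overhead is measured.

  set -e

  rm -rf parent child

  git init -q parent
  cd parent
  echo content >file
  git add file
  git commit -q -m initial

  # 1000 refs, all pointing at the same commit
  i=1
  while test $i -le 1000
  do
          git branch "b$i"
          i=$((i + 1))
  done
  cd ..

  git clone -q parent child
  cd child

  # the clone already has every remote-tracking ref, so this fetch
  # transfers nothing; only the per-ref bookkeeping cost is timed
  time git fetch origin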