On 03/06/2015 11:59 PM, Jeff King wrote:
> On Fri, Mar 06, 2015 at 05:48:39PM +0100, Ævar Arnfjörð Bjarmason wrote:
>
>> The --prune option to fetch added in v1.6.5-8-gf360d84 seems to be
>> around 20-30x slower than the equivalent operation with git remote
>> prune. I'm wondering if I'm missing something and fetch does something
>> more, but it doesn't seem so.
>
> [...]
> We spend a lot of time checking refs here. Probably this comes from
> writing the `packed-refs` file out 1000 times in your example, because
> fetch handles each ref individually. Whereas since c9e768b (remote:
> repack packed-refs once when deleting multiple refs, 2014-05-23),
> git-remote does it in one pass.
>
> Now that we have ref_transaction_*, I think if git-fetch fed all of the
> deletes (along with the updates) into a single transaction, we would get
> the same optimization for free. Maybe that is even part of some of the
> pending ref_transaction work from Stefan or Michael (both cc'd). I
> haven't kept up very well with what is cooking in pu.

I am looking into this now.

For pruning, we can't use a ref_transaction as it is currently
implemented, because the whole transaction would fail if any single
reference deletion failed. But in this case, I think that if any
deletions fail, we would prefer to emit a warning and keep going.

I'm trying to decide whether to have a separate function in the refs
API to "delete as many of the following refs as possible", or whether
to add a flag to ref_transaction_update() that says "try this update,
but don't fail the transaction if it fails". The latter would probably
be more work, because we would need to invent a way to return multiple
error messages from a single transaction.

Michael

-- 
Michael Haggerty
mhagger@xxxxxxxxxxxx
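
For concreteness, here is a rough sketch of what the first option might
look like. None of these names are the actual refs API:
delete_refs_best_effort() and delete_loose_ref() are placeholders for
whatever entry point and low-level deletion primitive would actually be
used, and repack_without_refs() is assumed to be a batch-repacking
helper along the lines of what c9e768b does for git-remote.

/*
 * Hypothetical sketch only -- none of these names are the real git
 * refs API.  Delete as many of the given refs as possible, warning
 * on individual failures instead of rolling everything back, and
 * rewriting packed-refs only once.  error()/warning() are git's
 * usual reporting helpers.
 */
static int delete_refs_best_effort(const char **refnames, int n)
{
	int i, result = 0;

	/*
	 * Assumed helper: drop all of the given refs from the
	 * packed-refs file in a single rewrite, rather than once
	 * per ref as fetch --prune effectively does today.
	 */
	if (repack_without_refs(refnames, n))
		return error("could not remove refs from packed-refs");

	for (i = 0; i < n; i++) {
		/* Assumed helper: unlink the loose ref, if present. */
		if (delete_loose_ref(refnames[i])) {
			warning("could not prune %s", refnames[i]);
			result = -1;	/* note the failure, keep going */
		}
	}
	return result;
}

The point of this shape is that packed-refs is rewritten once up front,
and a failure on any individual loose ref is downgraded to a warning
instead of aborting the rest of the prune -- which is exactly what a
single all-or-nothing ref_transaction cannot currently express.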