On 05/06/2014 08:40 PM, Junio C Hamano wrote:
> Michael Haggerty <mhagger@xxxxxxxxxxxx> writes:
>
>> It would be pretty annoying to spend a lot of time fetching a big pack,
>> only to have the fetch fail because one reference out of many couldn't
>> be updated. This would force the user to download the entire pack
>> again,...
>
> Is that really true? Doesn't quickfetch optimization kick in for
> the second fetch?

Yes, I guess it would. I wasn't aware of that optimization. Thanks for
the pointer. I withdraw my objection to using atomic reference updates
for fetch.

Michael

-- 
Michael Haggerty
mhagger@xxxxxxxxxxxx
http://softwareswirl.blogspot.com/
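
[Editor's note: for readers unfamiliar with the optimization referenced
above, "quickfetch" is git's pre-transfer check that skips the network
round trip when every wanted object is already present and connected in
the local object store, which is why a retried fetch after a failed ref
update stays cheap. Below is a rough standalone sketch of that idea, not
the actual builtin/fetch.c code; the helper name all_objects_local is
hypothetical, and it simply shells out to
"git rev-list --objects --quiet <tips> --not --all", which exits non-zero
if any needed object is missing.]

	/*
	 * Rough standalone sketch of the idea behind git's quickfetch
	 * check (NOT the actual builtin/fetch.c code): before fetching a
	 * pack, ask rev-list whether every wanted tip is already present
	 * and fully connected locally.  If it is, a (re)fetch needs no
	 * pack transfer at all and can go straight to updating refs.
	 */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	static int all_objects_local(char **want, int nr)
	{
		char cmd[4096] = "git rev-list --objects --quiet";
		int i;

		for (i = 0; i < nr; i++) {
			strncat(cmd, " ", sizeof(cmd) - strlen(cmd) - 1);
			strncat(cmd, want[i], sizeof(cmd) - strlen(cmd) - 1);
		}
		strncat(cmd, " --not --all", sizeof(cmd) - strlen(cmd) - 1);

		/* rev-list exits non-zero if any needed object is missing */
		return system(cmd) == 0;
	}

	int main(int argc, char **argv)
	{
		if (argc < 2) {
			fprintf(stderr, "usage: %s <commit>...\n", argv[0]);
			return 1;
		}
		if (all_objects_local(argv + 1, argc - 1))
			printf("all objects already local; no pack download needed\n");
		else
			printf("objects missing; a retry would have to fetch a pack\n");
		return 0;
	}

[One way to try it (again, just an illustration): run the program with
the commit IDs a fetch would want, e.g. the output of
"git ls-remote origin refs/heads/master | cut -f1"; after a fetch that
downloaded the pack but failed on a ref update, the check reports that
everything is already local.]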