On Friday, September 30, 2011 03:02:30 pm Martin Fick wrote:
> On Friday, September 30, 2011 10:41:13 am Martin Fick wrote:
>
> Since a full sync is now down to about 5 minutes, I broke
> down the output a bit. It appears that the longest part
> (2:45m) is still the time spent scrolling through each
> change. Each one of these takes about 2ms:
>
>  * [new branch]  refs/changes/99/71199/1 -> refs/changes/99/71199/1
>
> Seems fast, but at about 80K... So, are there any obvious
> N loops over the refs happening inside each of the
> [new branch] iterations?

OK, I narrowed it down, I believe. If I comment out the
invalidate_cached_refs() line in write_ref_sha1(), it speeds
through this section. I guess this makes sense: we invalidate
the cache, and then have to rebuild it, after every new ref
that is added?

Perhaps a simple fix would be to move the invalidation to
right after all the refs are updated? Maybe write_ref_sha1()
could take a flag telling it not to invalidate the cache, so
that during iterative updates invalidation could be disabled
and then run manually once after the whole update?

-Martin

--
Employee of Qualcomm Innovation Center, Inc. which is a
member of Code Aurora Forum

--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
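[Editorial note: the cost pattern described in the mail above can be illustrated with a small sketch. This is NOT git's actual C code; the `RefStore` class and its methods are hypothetical stand-ins. The point is that invalidating and rebuilding a cache after each of N ref updates does O(N) work per update, i.e. O(N^2) total, while invalidating once after the whole batch (the flag Martin proposes) keeps it near O(N):]

```python
# Hypothetical model of per-update vs. batched cache invalidation.
# RefStore is an illustration only, not git's refs code.

class RefStore:
    def __init__(self):
        self.refs = {}      # ref name -> sha1
        self._cache = None  # cached, sorted view of all refs
        self.rebuilds = 0   # how many times the cache was rebuilt

    def _invalidate(self):
        self._cache = None

    def cached_refs(self):
        # Rebuild the cache lazily if it was invalidated.
        if self._cache is None:
            self._cache = sorted(self.refs.items())  # O(N log N) rebuild
            self.rebuilds += 1
        return self._cache

    def write_ref(self, name, sha1, invalidate=True):
        # 'invalidate' is the flag the mail proposes adding.
        self.refs[name] = sha1
        if invalidate:
            self._invalidate()


def bulk_update(store, updates, batched):
    for name, sha1 in updates:
        store.write_ref(name, sha1, invalidate=not batched)
        store.cached_refs()   # something reads the cache between writes
    if batched:
        store._invalidate()   # invalidate once, after all updates
        store.cached_refs()


updates = [("refs/changes/%d" % i, "%040x" % i) for i in range(1000)]

per_update = RefStore()
bulk_update(per_update, updates, batched=False)

batched = RefStore()
bulk_update(batched, updates, batched=True)

print(per_update.rebuilds, batched.rebuilds)  # prints "1000 2"
```

With per-update invalidation every intervening read triggers a full rebuild (1000 rebuilds for 1000 refs); with batched invalidation the cache is rebuilt only at the first read and once after the final invalidation.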