On Wed, May 10, 2017 at 03:11:57PM -0700, Jonathan Tan wrote:

> After looking at ways to solve jrnieder's performance concerns, if we're
> going to need to manage one more item of state within the function, I
> might as well use my earlier idea of storing unmatched refs in its own
> list instead of immediately freeing them. This version of the patch
> should have much better performance characteristics.

Hrm. So the problem in your original was that the loop became quadratic
in the number of refs when fetching all of them (because the original
relies on the sorting to essentially do a list-merge). Are there any
quadratic bits left?

> @@ -649,6 +652,25 @@ static void filter_refs(struct fetch_pack_args *args,
>
>  		if ((allow_unadvertised_object_request &
>  		    (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1))) {
> +			can_append = 1;
> +		} else {
> +			struct ref *u;
> +			/* Check all refs, including those already matched */
> +			for (u = unmatched; u; u = u->next) {
> +				if (!oidcmp(&ref->old_oid, &u->old_oid)) {
> +					can_append = 1;
> +					goto can_append;
> +				}
> +			}

This is inside the nr_sought loop. So if I were to do:

  git fetch origin $(git ls-remote origin | awk '{print $1}')

we're back to being quadratic. I realize that's probably a silly thing
to do, but in the general case, you're O(m*n), where "n" is the number
of unmatched remote refs and "m" is the number of SHA-1s you're looking
for.

Doing better would require either sorting both lists, or storing the
oids in something that has better than linear-time lookup. Perhaps a
sha1_array or an oidset? We don't actually need to know anything about
the unmatched refs after the first loop; we just need the set of oids
that lets us set can_append (see the rough sketch at the end of this
mail).

AIUI, you could also avoid creating the unmatched list entirely when
the server advertises tip/reachable sha1s. That's a small optimization,
but I think it may actually make the logic clearer.

-Peff
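
P.S. Here's roughly the shape I had in mind, leaning on the oidset API
from oidset.h. This is totally untested, the variable names are
arbitrary, and it hand-waves over the surrounding filter_refs() code,
so treat it as a sketch rather than a real patch:

  struct oidset unmatched_oids = OIDSET_INIT;

  /*
   * In the first loop over the advertised refs, when we decide not to
   * keep a ref, remember only its oid before freeing it, and only when
   * we will actually need it later (i.e., when tip/reachable sha1
   * requests are not allowed):
   */
  if (!(allow_unadvertised_object_request &
        (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)))
          oidset_insert(&unmatched_oids, &ref->old_oid);
  free(ref);

  /*
   * In the nr_sought loop, the per-item check then becomes a hash
   * lookup instead of a walk over an unmatched list:
   */
  if ((allow_unadvertised_object_request &
       (ALLOW_TIP_SHA1 | ALLOW_REACHABLE_SHA1)) ||
      oidset_contains(&unmatched_oids, &ref->old_oid))
          can_append = 1;

  /* and before returning from filter_refs() */
  oidset_clear(&unmatched_oids);

That would also give you the "skip the bookkeeping entirely when
tip/reachable sha1s are allowed" part for free.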