One option that has been discussed for resumable clones is to let the server point the client, over http, to a static bundle containing most of the repository's history, followed by a fetch from the actual git repo (which should be much cheaper once the client has all of the bundled history). This series implements "step 0" of that plan: letting bundles be fetched across the network in the first place.

Shawn raised some issues with using bundles for this (as opposed to serving the packfiles themselves); in particular, it increases the I/O footprint of a repository that has to serve both the bundled copy of the pack and the regular packfile. So we may not follow this plan all the way through. However, even if we don't, fetching bundles over http is a useful thing to be able to do in its own right, which makes this first step worth doing either way.

  [01/14]: t/lib-httpd: check for NO_CURL
  [02/14]: http: turn off curl signals
  [03/14]: http: refactor http_request function
  [04/14]: http: add a public function for arbitrary-callback request
  [05/14]: remote-curl: use http callback for requesting refs
  [06/14]: transport: factor out bundle to ref list conversion
  [07/14]: bundle: add is_bundle_buf helper
  [08/14]: remote-curl: free "discovery" object
  [09/14]: remote-curl: auto-detect bundles when fetching refs
  [10/14]: remote-curl: try base $URL after $URL/info/refs
  [11/14]: progress: allow pure-throughput progress meters
  [12/14]: remote-curl: show progress for bundle downloads
  [13/14]: remote-curl: resume interrupted bundle transfers
  [14/14]: clone: give advice on how to resume a failed clone

Rough sketches of how a few of the trickier pieces (patches 07, 11, and 13) might look are appended below.

-Peff
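To flesh out what the auto-detection in 07 and 09 relies on: a v2 bundle always begins with the signature line "# v2 git bundle", so peeking at the first bytes of a response is enough to classify it without consulting the filesystem. A minimal sketch of that check (the helper's name comes from the patch subject, but this body is illustrative, not the series' actual code):

  #include <string.h>

  static const char bundle_signature[] = "# v2 git bundle\n";

  static int is_bundle_buf(const void *data, size_t len)
  {
          size_t siglen = sizeof(bundle_signature) - 1;

          /* classify a buffer by its leading signature line */
          return len >= siglen && !memcmp(data, bundle_signature, siglen);
  }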
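Patch 11 exists because we may not know the bundle's total size up front (e.g., with chunked transfer encoding), so the meter can only report bytes so far and a rate, never a percentage. A standalone sketch of that idea, assuming nothing about git's real progress.c API:

  #include <stdio.h>
  #include <time.h>

  struct throughput {
          time_t start;
          unsigned long bytes;
  };

  static void throughput_update(struct throughput *tp, unsigned long n)
  {
          double elapsed;

          /* remember when the first byte arrived */
          if (!tp->bytes)
                  tp->start = time(NULL);
          tp->bytes += n;

          /* no total is known, so show only a count and average rate */
          elapsed = difftime(time(NULL), tp->start);
          if (elapsed > 0)
                  fprintf(stderr, "\rReceiving bundle: %lu bytes (%.0f KiB/s)",
                          tp->bytes, (double)tp->bytes / elapsed / 1024.0);
  }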
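And the resume in 13 maps naturally onto libcurl's range support: stat the partial file and ask the server for only the remaining bytes. A hedged sketch, not the series' actual code (the function name and error handling here are made up); it also shows the CURLOPT_NOSIGNAL setting that 02 is about:

  #include <curl/curl.h>
  #include <stdio.h>
  #include <sys/stat.h>

  static int fetch_bundle_resumable(const char *url, const char *path)
  {
          struct stat st;
          curl_off_t offset = stat(path, &st) ? 0 : (curl_off_t)st.st_size;
          FILE *out = fopen(path, offset ? "ab" : "wb");
          CURL *curl = curl_easy_init();
          CURLcode ret = CURLE_FAILED_INIT;

          if (out && curl) {
                  curl_easy_setopt(curl, CURLOPT_URL, url);
                  curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
                  /* resume: request only the bytes we do not already have */
                  curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, offset);
                  /* keep libcurl from using SIGALRM for DNS timeouts */
                  curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
                  ret = curl_easy_perform(curl);
          }
          if (curl)
                  curl_easy_cleanup(curl);
          if (out)
                  fclose(out);
          return ret == CURLE_OK ? 0 : -1;
  }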