Jeff King <peff@xxxxxxxx> wrote:
> On Wed, Jun 03, 2009 at 12:15:55PM -0700, Shawn O. Pearce wrote:
>
> > What we could do is try to organize the fetch queue by object type:
> > get all commits, then all trees, then blobs.  The blobs are the
> > bulk of the data, and by the time we hit them, we should be able
> > to give some estimate on progress because we have all of the ones
> > we need to fetch in our fetch queue.  But it's only an "object count"
> > sort of thing, not a byte count.
>
> That's clever, and I think an "object count" would be fine (after all,
> that is all that git:// fetching provides).  However, I'm not sure how
> it would work in practice.  When we follow a walk to a commit in a
> pack, do we really want to try to pull _just_ that commit?

No, we pull the whole pack.  So the progress meter would have to
switch to a content-length display for the pack pull, then go back to
the object queue.  If that means we just pulled *all* of the blobs we
have queued up, great, we can probably actually whack them out of the
queue once the pack is down.

Actually, that's really smart to do, because then we don't build up a
massive list of objects when cloning a very large repository like
Gentoo.

By delaying trees/blobs, I meant delaying them for loose object fetch
only, not pack-based fetch.

-- 
Shawn.
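The scheme discussed above could be sketched roughly as follows. This is hypothetical Python, not git's actual http-walker code; all names (`FetchQueue`, `drain_pack`, etc.) are invented for illustration. The queue keeps commits before trees before blobs so an object count is meaningful by the time blobs are reached, and once a whole pack has been downloaded (that step's progress would use the HTTP Content-Length, a byte count), every queued object the pack delivered is whacked out of the queue.

```python
# Illustrative sketch only -- not git's implementation.

# Fetch queue ordered by object type: commits first, then trees, then
# blobs.  The blobs are the bulk of the data, so by the time we reach
# them the queue holds everything left to fetch and a simple object
# count gives a usable progress figure (like git:// fetching reports).
TYPE_ORDER = {"commit": 0, "tree": 1, "blob": 2}

class FetchQueue:
    def __init__(self):
        self.pending = []  # list of (type_rank, sha1)

    def add(self, sha1, obj_type):
        self.pending.append((TYPE_ORDER[obj_type], sha1))
        self.pending.sort()  # keep commits -> trees -> blobs ordering

    def progress(self, done):
        # Object-count progress, not a byte count.
        return f"{done}/{done + len(self.pending)} objects"

    def drain_pack(self, pack_contents):
        # A whole pack was just downloaded.  Remove every queued
        # object that arrived in it, so we don't build up a massive
        # list of objects when cloning a very large repository.
        got = set(pack_contents)
        self.pending = [(t, s) for (t, s) in self.pending if s not in got]

# Demo: objects discovered in any order are queued commits-first, and
# a finished pack prunes the queue.
q = FetchQueue()
for sha1, kind in [("b1", "blob"), ("c1", "commit"), ("t1", "tree")]:
    q.add(sha1, kind)
q.drain_pack({"b1", "t1"})  # the pack happened to contain the tree and blob
```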