On Tue, 12 Feb 2008, Jon Smirl wrote:
>
> In the gcc case I wasn't running out of memory. I believe I was CPU
> bound for an hour processing a single object chain with 2000 entries.
> That sure doesn't feel like O(windowsize).

Well, there's another - and totally unrelated - issue with
*pre-existing* delta chains that are very deep.

Namely that since such a deep delta chain exhausts the delta-cache,
you now get O(n*chaindepth) behaviour when you unpack the objects (in
order to generate the new deltas) in the first place!

So that really has nothing to do with the new window (or delta) depth
at all, just with the _previous_ delta depth.

See sha1_file.c: MAX_DELTA_CACHE. If you have a 2000-deep delta chain,
the delta-cache needs to be big enough that you hit in it regularly,
without flushing it, as you traverse down the chain.

So MAX_DELTA_CACHE should generally be at _least_ as big as the
maximum delta chain length, which is normally the case (the default
max delta chain length is 10).

We could probably fairly easily make MAX_DELTA_CACHE a config option,
but right now you have to recompile to test that theory of mine. Or
just limit your delta depth to something much smaller (i.e. ~100 or
so).

Linus
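
PS. To see how much a too-small cache hurts, here's a toy simulation -
emphatically not the real sha1_file.c code (that one is a fixed-size
hash table of MAX_DELTA_CACHE entries; 256, if I remember right), just
a chain of 2000 objects where object i is a delta against object i-1,
unpacked deepest-first through a simple LRU cache. Applying a delta
needs its base, so a cache miss recurses down the chain, and when the
cache is smaller than the chain, the traversal itself flushes the
cache and the work gets redone:

#include <stdio.h>
#include <string.h>

#define CHAIN_DEPTH 2000

static int cached[CHAIN_DEPTH];  /* 1 if object's unpacked form is cached */
static long stamp[CHAIN_DEPTH];  /* last-use time, for LRU eviction */
static long now, applications;
static int used, cap;

static void evict_lru(void)
{
        int victim = -1, i;

        for (i = 0; i < CHAIN_DEPTH; i++)
                if (cached[i] && (victim < 0 || stamp[i] < stamp[victim]))
                        victim = i;
        cached[victim] = 0;
        used--;
}

/* Unpack the object at position "pos" in the chain (0 = the base). */
static void unpack(int pos)
{
        if (pos < 0)
                return;
        if (cached[pos]) {
                stamp[pos] = ++now;  /* hit: base is ready, refresh LRU */
                return;
        }
        unpack(pos - 1);             /* miss: materialize the base first */
        applications++;              /* ...then apply one delta on top */
        if (used == cap)
                evict_lru();
        cached[pos] = 1;
        used++;
        stamp[pos] = ++now;
}

static long run(int cache_size)
{
        int i;

        memset(cached, 0, sizeof(cached));
        used = 0;
        applications = 0;
        cap = cache_size;
        for (i = CHAIN_DEPTH - 1; i >= 0; i--)
                unpack(i);           /* unpack every object, deepest first */
        return applications;
}

int main(void)
{
        printf("cache size  256: %ld delta applications\n", run(256));
        printf("cache size 2048: %ld delta applications\n", run(2048));
        return 0;
}

With a cache bigger than the chain (2048), every delta gets applied
exactly once: 2000 applications total. With 256 entries the very same
traversal costs 8832, and the ratio only gets worse as the chain gets
deeper relative to the cache.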
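
PPS. For the depth-limiting route you don't need to recompile anything
when you re-pack: "git repack -a -d -f --depth=100" (or setting
pack.depth in your config) should do it, at the cost of -f recomputing
all the deltas.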