On Sun, 8 Jul 2007, Brian Downing wrote:
>
> I think what I'd like is an extra option to repack to limit window
> memory usage. This would dynamically scale the window size down if it
> can't fit within the limit, then scale it back up once you're off of
> the nasty file. This would let me repack my repository with
> --window=100 and have it actually finish someday on the machines I
> have access to. The big file may not be as efficiently packed as
> possible, but I can live with that.
>
> My question is, is this sane? Does the repack algorithm depend on
> having a fixed window size to work? I'd rather not look into
> implementing this if it's silly on the face of it.

It doesn't sound silly, and it should even be fairly easy.

The window code is all in builtin-pack-objects.c (find_deltas()), and
while it's currently coded for a constant-sized window, it shouldn't be
too hard to free more old entries when you allocate one big one, to
make sure that the "array" thing doesn't grow to contain too much data.

In other words, just look at how the variables "struct unpacked *array"
(the whole window array) and "struct unpacked *n" (the "next entry" in
the array, maintained as a simple circular queue via "idx") are
accessed.

		Linus