On 8/5/06, Martin Langhoff <martin.langhoff@xxxxxxxxx> wrote:
> On 8/5/06, Jon Smirl <jonsmirl@xxxxxxxxx> wrote:
> > On 8/4/06, Linus Torvalds <torvalds@xxxxxxxx> wrote:
> > > and you're basically all done. The above would turn each *,v file into a
> > > *-<sha>.pack/*-<sha>.idx file pair, so you'd have exactly as many
> > > pack-files as you have *,v files.
> >
> > I'll end up with 110,000 pack files.
>
> Then just do it every 100 files, and you'll only have 1,100 pack files,
> and it'll be fine.
This is something that has to be tuned. If you wait too long, everything
spills out of RAM and you go totally I/O-bound for days. If you do it too
often, you end up with too many packs and it takes a day to repack them.
If I had a way to pipe all of the objects into repack one at a time,
without repack doing multiple passes, none of this tuning would be
necessary. In that model the standalone objects never get created in the
first place. The fastest I/O is I/O that has been eliminated.
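For concreteness, here is a rough sketch of the batching you describe. It
assumes a hypothetical per-file importer, import-rcs-file, that imports
one *,v file and prints the SHA-1 of every object it writes; the real
importer interface will differ. Each batch of 100 files gets rolled into
one pack with git pack-objects, which reads object names on stdin:

    #!/bin/sh
    # Pack every 100 imported *,v files into a single pack instead
    # of one pack per file (or thousands of loose objects).
    # 'import-rcs-file' is hypothetical: assume it imports one RCS
    # file and prints the SHA-1 of each object created, one per line.
    count=0
    : > /tmp/objlist
    find . -name '*,v' | while read -r f; do
        import-rcs-file "$f" >> /tmp/objlist
        count=$((count + 1))
        if [ "$count" -ge 100 ]; then
            # pack-objects reads SHA-1s on stdin and writes
            # pack-<sha1>.pack/.idx under the given base name
            git pack-objects .git/objects/pack/pack < /tmp/objlist
            git prune-packed   # drop the now-redundant loose objects
            : > /tmp/objlist
            count=0
        fi
    done
    # (objects from a final partial batch are left loose here)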
> > I suspect when I run repack over
> > that it is going to take 24hrs or more,
>
> Probably, but only the initial import has to incur that huge cost.
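For reference, I'm assuming that one-time cost is just a full repack
along these lines; the window/depth values are illustrative guesses, not
numbers tuned for a tree this size:

    # Collapse the per-batch packs (and any loose objects) into a
    # single, better-deltified pack, deleting the old packs after.
    # Larger --window/--depth improve compression at more CPU cost.
    git repack -a -d --window=20 --depth=20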
Mozilla developers aren't all rushing to switch to git. A switch needs to
be as painless as possible. If things are too complex, they simply won't
switch. Switching Mozilla to git is going to require a sales job and
proof that the tools are reliable and better than CVS. Right now I can't
even reliably import Mozilla CVS. One of the conditions for even
considering git is that they can easily do the CVS import internally and
verify it for accuracy.

--
Jon Smirl
jonsmirl@xxxxxxxxx