On Sun, 11 Jun 2006, Jon Smirl wrote:
>
> I have it stopped and I am running the repack.
> There are 1.27M files in my .git directory

Yeah, that would do it. That's ~5000 files per object directory, so I
assume that your directories are 200+kB in size, and for every new object
added, you'll basically have to traverse the old directory fully in order
to find an empty place for it (and without hashing, you'll traverse it
_twice_: first to look for the object, then to look for the empty space).

Btw, after repacking, if it still has lots of loose objects, and you still
have several directories that are huge (because there are pending objects
for a commit that hadn't happened yet when you ^Z'd the svnimport), you'll
literally get better performance if you do something like

	for i in ??
	do
		cp -r $i $i.new
		rm -rf $i
		mv $i.new $i
	done

in your .git/objects/ directory (CAREFUL! Any script that does "rm -rf"
should be double- and triple-checked for sanity! ;)

That should make sure that you don't still have huge directories. (And
yes, this is a real problem at least with ext3.)

The git cvsimporter ends up repacking the archive every thousand commits.
That's just a random number, but it's indicative of what we did there to
handle large imports.

I don't think anybody has done a large import using git-svnimport before,
so you're in new territory, which explains some of the teething problems.

		Linus
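[Editor's note: as a rough sketch of the periodic-repack idea above, the
following counts loose objects in git's fan-out object store and repacks
past a threshold. The function name `count_loose` and the 1000 threshold
are illustrative, not from the mail; `git repack -a -d` is the standard
repack invocation.]

```shell
# count_loose DIR: count loose objects in a git-style fan-out object
# directory (two-hex-digit subdirectories, one file per object).
count_loose() {
	find "$1" -mindepth 2 -type f 2>/dev/null | wc -l | tr -d ' '
}

# Illustrative threshold check, roughly what git-cvsimport's
# every-thousand-commits repack achieves for large imports.
if [ "$(count_loose .git/objects)" -gt 1000 ]; then
	git repack -a -d
fi
```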