On Sat, Apr 17, 2010 at 02:19:40AM +0200, Sverre Rabbelier wrote:
> Heya,
>
> [-wikitech-l, if they should be kept on the cc please re-add; I assume
> the discussion of the git aspects is not relevant to that list]
>
> On Sat, Apr 17, 2010 at 01:47, Richard Hartmann
> <richih.mailinglist@xxxxxxxxx> wrote:
> > This data set is probably the largest set of changes on earth, so
> > it's highly interesting to see what git will make of it.
>
> I think that git might actually be able to handle it. Git's been known
> not to handle _large files_ very well, but a lot of history and a lot of
> files are a different matter. Assuming you do the import incrementally
> using something like git-fast-import (feeding it with a custom
> exporter that uses the dump as its input), you shouldn't even need an
> extraordinary machine to do it (although you'd need a lot of storage).

The question is how the commits and trees would be laid out. If every
wiki revision becomes a git commit, we'd need to handle 300M commits,
and with 19M wiki pages (i.e. files) the tree objects would be very
large, so git-fast-import would crawl.

Some tests with the German Wikipedia have shown that importing the
blobs is doable on normal hardware. Getting the trees and commits into
git has not been possible so far, as fast-import was simply too slow
(and got slower after 1M commits).

I had the idea of writing an importer that handles just this special
case (one file change per commit), but I haven't gotten around to
trying it yet.

bye, Sebastian
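
P.S.: To make that special case concrete, here is a rough sketch of the
kind of stream a custom exporter could feed to git-fast-import, with each
wiki revision becoming one commit that touches exactly one page file. The
revision iterator and its field names below are made-up placeholders for a
real dump parser; only the blob/commit/data/M commands are actual
fast-import syntax.

#!/usr/bin/env python3
# Sketch of a single-file-change-per-commit exporter for git-fast-import.
# iter_revisions() is a placeholder for a real dump parser.
import sys

def emit(b):
    sys.stdout.buffer.write(b)

def data_block(payload):
    # fast-import's exact-byte-count "data" command
    emit(b"data %d\n" % len(payload))
    emit(payload)
    emit(b"\n")

def iter_revisions():
    # Stand-in: a real exporter would stream (title, text, author,
    # timestamp, comment) tuples out of the XML dump in chronological order.
    yield ("Example_page", "Hello, wiki!\n", "Example User", 1271462380,
           "initial text")

mark = 0
for title, text, author, timestamp, comment in iter_revisions():
    mark += 1
    blob_mark = mark
    emit(b"blob\nmark :%d\n" % blob_mark)
    data_block(text.encode("utf-8"))

    mark += 1
    emit(b"commit refs/heads/master\nmark :%d\n" % mark)
    emit(b"committer %s <%s@wiki.invalid> %d +0000\n"
         % (author.encode("utf-8"),
            author.replace(" ", "_").encode("utf-8"),
            timestamp))
    data_block(comment.encode("utf-8"))
    # Each commit modifies only the one page; within a single stream,
    # fast-import carries the rest of the tree over from the previous
    # commit on the branch.  Real page titles would still need path
    # escaping/quoting here.
    emit(b"M 100644 :%d %s\n" % (blob_mark, title.encode("utf-8")))
    emit(b"\n")

Piping that into "git fast-import" in a freshly initialized repository
should give one commit per revision; whether it stays fast past a few
million commits is exactly the open question above.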