Rocco Rutte <pdmef@xxxxxxx> wrote:
> The performance bottleneck is hg exporting data, as discovered by
> people on #mercurial. The problem is not really fixable and is due
> to hg's revlog handling. As a result, I needed to let the script
> feed the full contents of the repository at each revision we walk
> (i.e. all of them for the initial import) into git-fast-import.

I thought that hg stored file revisions such that each source file
(e.g. foo.c) had its own revision file (e.g. foo.revdata), and that
every revision of foo.c was stored in that one file, ordered from
oldest to newest?

If that is the case, why not stream all of those into fast-import up
front, one source file at a time, as a huge series of blobs, and mark
them, then do the commits/trees later on using only the marks? Or am
I just missing something about hg?

> This is horribly slow. For mutt, which contains several tags, a
> handful of branches and only 5k commits, this takes roughly two
> hours at 1 commit/sec.

Not fast-import's fault. ;-)

> Somewhat related: It would be really nice to teach git-fast-import
> to init from a previously saved marks file. Right now I use hg
> revision numbers as marks, let git-fast-import save them, and read
> them back the next time. These are needed to map hg revisions to
> git SHA1s in case I need to reference something in an incremental
> import from an earlier run. It would be nice if git-fast-import
> could do this on its own so that all consumers can benefit and can
> have persistent marks across sessions.

Sure, that sounds pretty easy. I'll try to work that up later today
or tomorrow.

--
Shawn.
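
To make the two-phase idea above concrete, a stream following it
might look like the sketch below. Every detail here is invented for
illustration (file name, blob contents, byte counts, identity, and
timestamps), and the stream is shown indented for readability: the
data counts assume the payload lines without that indent. The point
is that all revisions of foo.c enter first as marked blobs, and the
commits later name those marks instead of carrying inline file data.

  # Phase 1: every stored revision of foo.c, oldest to newest,
  # streamed as marked blobs.
  blob
  mark :1
  data 13
  hello, rev 1

  blob
  mark :2
  data 13
  hello, rev 2

  # Phase 2: commits reference the blobs by mark only, so no file
  # content has to be re-extracted while building history.
  commit refs/heads/master
  mark :3
  committer Example <ex@example.com> 1187000000 +0000
  data 15
  initial import
  M 100644 :1 foo.c

  commit refs/heads/master
  mark :4
  committer Example <ex@example.com> 1187000000 +0000
  data 14
  second commit
  from :3
  M 100644 :2 foo.c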
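
On the persistent-marks point: today's git fast-import pairs the
--export-marks=<file> option (the "save them" half already in use
above) with an --import-marks=<file> option, so an incremental
importer can round-trip its mark table between runs without managing
the mapping by hand. A typical pair of invocations (the stream and
marks file names here are arbitrary):

  # first run: write the mark table out when done
  git fast-import --export-marks=hg.marks <first-run.stream

  # later runs: reload the table, then write the updated one back
  git fast-import --import-marks=hg.marks --export-marks=hg.marks \
      <incremental.stream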