Re: Figured out how to get Mozilla into git

Apologies, I dropped out of the conversation -- Friday night drinks
(NZ timezone) took over ;-)

Now, back on track...

On 6/10/06, Linus Torvalds <torvalds@xxxxxxxx> wrote:
> In the meantime, the fact that git-cvsimport can be done incrementally
> means that once we have the silly pack-file-mapping details worked out, it
> should be perfectly fine to run the 3-day import just once, and then work
> on it incrementally afterwards without any real problems.

Exactly. The bottleneck at this point is cvsps -- I also remember vague
promises from a list regular about publishing a git repo with cvsps 2.1
plus some patches from the list.

In any case, and for the record, my cvsps is 2.1 pristine. It handles
the mozilla repo alright, as long as I give it a lot of RAM. I _think_
it slurped 3GB with the mozilla cvs.

I want to review that cvs2svn importer, probably to steal the test
cases and perhaps some logic to revamp or replace cvsps. The thing is --
we can't just drop or replace cvsimport, because it does incremental
imports, so continuity and consistency are key. All the CVS importers
have to make some hard decisions when the data is bad -- however we end
up fudging it, we want to fudge it consistently ;-)
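For reference, a rough sketch of what such an incremental import cycle
looks like with git-cvsimport (the CVS root and module name here are
hypothetical placeholders, not the actual Mozilla server):

```shell
# One-off initial import (can take days on a tree the size of Mozilla).
# -d names the CVS root, -C the target git directory, -i imports without
# touching the working tree, -v is verbose.
git cvsimport -v -i \
    -d :pserver:anonymous@cvs.example.org:/cvsroot \
    -C mozilla.git mozilla

# Later runs against the same -C directory pick up only the new CVS
# commits -- this is the incremental mode the thread relies on.
cd mozilla.git
git cvsimport -v -i -d :pserver:anonymous@cvs.example.org:/cvsroot mozilla
```

Because each run has to make the same fudging decisions about bad CVS
data, swapping the importer mid-stream would break the continuity of an
already-published history -- which is the constraint above.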

> So people like you who want to work on it off-line using a distributed
> system _can_ do so, realistically. Maybe not practically _today_

Other than "don't run repack -a", it's feasible. In fact, that's how I
use git 99% of the time -- to do DSCM stuff on projects that are using
CVS, like Moodle.

> The poor python/perl guys may write things more
> quickly, but when they hit a language wall, they hit it.

Flamebait anyone? ;-) It is a different kind of fun -- let's say that
on top of knowing the performance tricks (or, to be more hip: "design
patterns") for the hardware and OS, you also end up learning the
performance tricks of the interpreter/vm/whatever.

> It would be better to rsync Martins copy, he has a lot more bandwidth.

I'm coming down to the office now to pick up my laptop, and I'll rsync
it out to our git machine (also NZ kernel mirror, bandwidth should be
good). That's one of the things I've discovered with these large
trees: for the initial publish action, I just use rsync or scp.
Perhaps I'm doing it wrong, but git-push doesn't optimise the
'initialise repo' case, and it takes ages (and in this case, it'd
probably OOM).
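A sketch of that workflow, with hypothetical host and path names: seed
the bare repository once with rsync (or scp), then fall back to git push
for the incremental updates:

```shell
# Initial publish: copy the packed bare repository wholesale.
# rsync streams the pack files as plain data, so it sidesteps the
# object enumeration that makes an initial git-push slow (and
# memory-hungry) on a multi-GB tree.
rsync -avz mozilla.git/ git.example.org:/pub/git/mozilla.git/

# Subsequent updates only transfer the new objects, so the normal
# push path is fine from then on.
cd mozilla.git
git push git.example.org:/pub/git/mozilla.git master
```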

> So it will take me quite some time to download 2GB+, regardless of how fat
> a pipe the other end has ;)

Right-o. Linus, Jon, can you guys then ping me when you have cloned it
safely so I can take it down again?

cheers,


martin
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
