Re: git-fast-import

On 8/5/06, Shawn Pearce <spearce@xxxxxxxxxxx> wrote:
> Jon Smirl <jonsmirl@xxxxxxxxx> wrote:
> > git-fast-import works great. I parsed and built my pack file in
> > 1hr 45min. That's way better than 24hr. I am still IO bound, but that
> > seems to be an issue with not being able to read ahead across 150K
> > small files. CPU utilization averages about 50%.

> Excellent.  Now if only the damn RCS files were in a more suitable
> format.  :-)
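
(For anyone curious, the frontend loop has roughly this shape: parse each
,v file and pipe every revision's contents to fast-import, which does the
pack writing. This is only a sketch -- extract_revisions() stands in for
the RCS parsing, and the blob/mark/data commands are just an illustrative
stream format, not necessarily what the fast-import prototype actually
takes.)

    import subprocess

    def feed_blobs(rcs_files, extract_revisions):
        # Assumed: fast-import reads commands on stdin and writes the
        # pack itself.  The command names below are illustrative only.
        fi = subprocess.Popen(['git-fast-import'], stdin=subprocess.PIPE)
        mark = 0
        for path in rcs_files:
            for data in extract_revisions(path):   # bytes of one revision
                mark += 1
                fi.stdin.write(b'blob\n')
                fi.stdin.write(('mark :%d\n' % mark).encode())
                fi.stdin.write(('data %d\n' % len(data)).encode())
                fi.stdin.write(data)
                fi.stdin.write(b'\n')
        fi.stdin.close()
        fi.wait()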

> > I didn't bother reading the sha ids back from fast-import; instead I
> > computed them in the Python code. Python has a C library function for
> > sha1. That decouples the processes from each other. They would run in
> > parallel on SMP.

> At least you are IO bound and not CPU bound.  But it is silly for the
> importer in Python to be computing the SHA1 IDs and for fast-import
> to also be computing them.  Would it help if fast-import allowed
> you to feed in a tag string which it dumps to an output file listing
> SHA1 and the tag?  Then you can feed that data file back into your
> tree/commit processing for revision handling.

I am IO bound; there is plenty of CPU to spare, and I am on a 2.8 GHz
single processor.
The sha1 is getting stored into an internal Python structure. The
structures then get sliced and diced a thousand ways to compute the
change sets.
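
(For reference, a blob's id is just the SHA1 of a "blob <size>\0" header
followed by the raw content, so it is cheap to compute on the frontend
side. A minimal version using hashlib, as a sketch:)

    import hashlib

    def blob_sha1(data):
        # Git object id for a blob: SHA1 over "blob <size>\0" + content.
        header = ('blob %d\0' % len(data)).encode()
        return hashlib.sha1(header + data).hexdigest()

    # Sanity check: the empty blob hashes to the well-known
    # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391.
    assert blob_sha1(b'') == 'e69de29bb2d1d6434b8b29ae775ad8c2e48c5391'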

The real goal of this is to use the cvs2svn code for change set
detection. Look at how much work these guys have put into making it
work on the various messed-up CVS repositories:
http://git.catalyst.net.nz/gitweb?p=cvs2svn.git;a=shortlog;h=a9167614a7acec27e122ccf948d1602ffe5a0c4b

cvs2svn is the only tool that read and built change sets for Moz CVS
on the first try.

> > My pack file is 980MB compared to 680MB from other attempts. I am
> > still missing entries for the trees and commits.

> The delta selection ain't the best.  It may be the case that prior
> attempts were combining files to get better delta chains vs. staying
> all in one file.

My suspicion is that prior attempts weren't capturing all of the
revisions. I know cvsps (the 680MB repo) was throwing away branches
that it didn't understand. I don't think anyone got parsecvs to run to
completion. MozCVS has 1,500 branches.

> It may be the case that the branches are causing the delta chains to
> be less than ideal.  I guess I expected slightly better, but not that
> much; earlier attempts were around 700 MB so I thought maybe you'd be
> in the 800 MB ballpark.  Under 1 GB is still good though, as it means
> it's feasible to fit the damn thing into memory on almost any system,
> which makes it pretty repackable with the standard packing code.

I am still missing all of the commits and trees. Don't know how much
they will add yet.

> It's possible that you are also seeing duplicates in the pack;
> I actually wouldn't be surprised if at least 100 MB of that was
> duplicates where the author(s) reverted a file revision to an exact
> prior revision, such that the SHA1 IDs were the same.  fast-import
> (as I have previously said) is stupid and will write the content
> out twice rather than "reuse" the existing entry.
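
(Catching those only needs a record of which ids have already gone into
the pack -- in rough Python terms, something like the sketch below. The
real fix obviously belongs in fast-import.c, and pack.append() here is
purely hypothetical.)

    def write_object(pack, written, sha1, data):
        # 'written' maps sha1 -> offset of the copy already in the pack,
        # so a reverted revision with an identical id is reused instead
        # of being stored twice.
        if sha1 in written:
            return written[sha1]
        offset = pack.append(data)    # hypothetical pack writer
        written[sha1] = offset
        return offset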

> Tonight I'll try to improve fast-import.c to include index
> generation, and at the same time perform duplicate removal.
> That should get you over the GPF in index-pack.c, may reduce disk
> usage a little for the new pack, and save you from having to perform
> a third pass on the new pack.

Sounds like a good plan.

--
Jon Smirl
jonsmirl@xxxxxxxxx