Re: git-fast-import

Jon Smirl <jonsmirl@xxxxxxxxx> wrote:
> git-fast-import works great. I parsed and built my pack file in
> 1:45hr. That's way better than 24hr. I am still IO bound but that
> seems to be an issue with not being able to read ahead 150K small
> files. CPU utilization averages about 50%.

Excellent.  Now if only the damn RCS files were in a more suitable
format.  :-)
 
> I didn't bother reading the sha ids back from fast-import; instead I
> computed them in the Python code. Python has a C library function for
> sha1. That decouples the processes from each other, so they would run
> in parallel on SMP.

At least you are IO bound and not CPU bound.  But it is silly for the
Python importer to compute the SHA1 IDs when fast-import is computing
them as well.  Would it help if fast-import let you feed in a tag
string that it dumps to an output file alongside the corresponding
SHA1?  Then you could feed that data file back into your tree/commit
processing for revision handling.
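
For reference, the blob ID the frontend computes is just the SHA1 of a
"blob <size>\0" header followed by the raw content.  A rough, untested
sketch in Python (hashlib here is only a stand-in for whatever sha
binding your importer actually uses):

    import hashlib

    def blob_sha1(content):
        # git's blob object ID: SHA1 over the "blob <size>\0" header plus content
        header = b"blob %d\0" % len(content)
        return hashlib.sha1(header + content).hexdigest()

    # sanity check against git's well-known empty-blob ID
    assert blob_sha1(b"") == "e69de29bb2d1d6434b8b29ae775ad8c2e48c5391"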

> My pack file is 980MB compared to 680MB from other attempts. I am
> still missing entries for the trees and commits.

The delta selection ain't the best.  It may be that prior attempts
were combining different files to get better delta chains, rather
than deltifying only within a single file, or that the branches are
keeping the delta chains from being ideal.  I expected slightly
better, but not that much better; earlier attempts were around 700 MB,
so I thought you'd land in the 800 MB ballpark.  Under 1 GB is still
good though, as it means it's feasible to fit the damn thing into
memory on almost any system, which makes it pretty repackable with
the standard packing code.

It's possible that you are also seeing duplicates in the pack;
I actually wouldn't be surprised if at least 100 MB of that was
duplicates where the author(s) reverted a file revision to an exact
prior revision, such that the SHA1 IDs were the same.  fast-import
(as I have previously said) is stupid and will write the content
out twice rather than "reuse" the existing entry.
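
If it helps in the meantime, the frontend could sidestep most of that
itself by remembering which blob IDs it has already streamed and
skipping exact repeats.  A rough sketch, not anything fast-import does
today (stream_blob is a made-up placeholder for however your importer
actually emits a blob to fast-import):

    import hashlib

    seen = set()

    def maybe_stream_blob(content, stream_blob):
        # skip content whose blob ID has already been sent to fast-import
        sha1 = hashlib.sha1(b"blob %d\0" % len(content) + content).hexdigest()
        if sha1 not in seen:
            seen.add(sha1)
            stream_blob(content)   # hypothetical emit-to-fast-import hook
        return sha1

Since you already reference blobs by the SHA1 you compute in Python,
the earlier copy in the pack serves for any later identical revision.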


Tonight I'll try to improve fast-import.c to include index
generation, and at the same time perform duplicate removal.
That should get you over the GPF in index-pack.c, may reduce disk
usage a little for the new pack, and save you from having to perform
a third pass on the new pack.

-- 
Shawn.
