Re: observations on parsecvs testing

On Thu, 2006-06-15 at 16:37 -0400, Nicolas Pitre wrote:
> My machine is a P4 @ 3GHz with 1GB ram.
> 
> Feeding parsecvs with the Mozilla repository, it first ran for 175 
> minutes with about 98% CPU spent in user space reading the 100458 ,v 
> files and writing 700000+ blob objects.  Memory usage grew to 1789MB 
> total while the resident memory saturated around 700MB.  This part was 
> fine even with 1GB of ram since unused memory was gently pushed to swap.  
> The only problem is that spawned git-pack-objects instances started 
> failing on memory allocation by that time, which is unfortunate but not 
> fatal.

Right, the ,v -> blob conversion process uses around 160 bytes per
revision as best I can count (one rev_commit, one rev_file and a
41-byte sha1 string); 700000 revisions would therefore use 1.1GB just
for the revision objects. It should be possible to reduce the size of
this data structure fairly significantly by converting the sha1 value
to binary and compressing the CVS revision number to minimal length.
Switching from the general git/cvs structure to this cvs-specific
structure is 'on the list' of things I'd like to do.
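
Concretely, the compacted per-revision record I have in mind would look
something like this (just a sketch -- the struct and field names are
invented for illustration and aren't the current parsecvs types):

#include <time.h>

struct compact_rev {
	unsigned char	sha1[20];	/* binary, instead of a 41-byte hex string */
	time_t		date;
	unsigned char	depth;		/* components in the CVS revision number */
	unsigned short	number[];	/* e.g. 1.2.4.3 stored in four shorts, not
					 * a fixed-size array padded to max depth */
};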

> But then things started to go bad after all ,v files were parsed.  
> parsecvs dropped to 3% CPU while the rest of the time was spent waiting 
> on swap I/O, and therefore no substantial progress was made at that 
> point.

Yeah, after this point, parsecvs is merging the computed revision
histories of the individual files into a global history. This means it's
walking across the whole set of files to compute each git commit. For
each branch, it computes the set of files visible at the head of that
branch and then sorts the last revisions of the visible files to discover
the last change set along that branch, constructing a commit for each
logical changeset backwards from the present into the past. As it's
constructing commits from the present backwards, it must go all the way
to the past before it can emit any commits to the repository. So it has
to save them somewhere; right now, it's saving them in memory. What it
could do is construct tree objects for each commit, saving only the sha1
that results and dumping the rest of the data. That should save plenty of
memory, but would require a radical restructuring of the code (which is
desperately needed, btw). With this change, parsecvs should actually
*shrink* over time, instead of growing.
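
In rough outline, the restructured flow might look like the sketch below.
Everything here is hypothetical -- the changeset type and helpers are
made-up stand-ins, not existing parsecvs code -- but it shows that only a
small record per changeset needs to stay resident once its tree object
has been written out:

#include <stdlib.h>
#include <time.h>

struct changeset;			/* per-file data for one logical change */

/* hypothetical helpers, standing in for whatever the real code would do */
extern void	write_tree_for_changeset(struct changeset *cs,
					 unsigned char sha1_out[20]);
extern char    *changeset_log(struct changeset *cs);
extern time_t	changeset_date(struct changeset *cs);
extern void	free_changeset_files(struct changeset *cs);

struct pending_commit {
	unsigned char		sha1[20];	/* tree sha1: all we keep resident */
	char			*log;
	time_t			date;
	struct pending_commit	*parent;	/* older changeset on this branch */
};

/* called while walking a branch from the newest changeset back to the oldest */
static struct pending_commit *
record_changeset(struct changeset *cs)
{
	struct pending_commit	*pc = calloc(1, sizeof(*pc));

	write_tree_for_changeset(cs, pc->sha1);	/* write the tree object now */
	pc->log = changeset_log(cs);
	pc->date = changeset_date(cs);
	free_changeset_files(cs);		/* per-file revision data can go away */
	return pc;
}

The commit objects themselves would still be emitted oldest-first at the
end, but by that point only these small records are left in memory.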

> So the Mozilla repository clearly requires 2GB of RAM to realistically be 
> converted to GIT using parsecvs, unless its second phase is reworked to 
> avoid totally random memory access in order to improve swap behavior, or 
> its in-memory data set is shrunk by at least half.

Changing the data structures used in the first phase will shrink them
significantly; replacing the second-phase data structures with sha1 tree
hash values and disposing of the first-phase objects incrementally
should yield a memory footprint that shrinks rather than grows. It might
well be easier at this point to just take the basic CVS parser and start
afresh, though; the code is a horror show of incremental refinements.

> Also rcs2git() is very inefficient, especially with files having many 
> revisions, as it reconstructs the delta chain on every call.  For example 
> mozilla/configure,v has at least 1690 revisions, and actually converting 
> it into GIT blobs goes at a rate of _only_ 2.4 objects per second on my 
> machine.  Can't objects be created as the delta list is walked/applied 
> instead?  That would significantly reduce the initial conversion time.

Yes, I wanted to do this, but also wanted to ensure that the constructed
versions exactly matched the native rcs output. Starting with 'real' rcs
code seemed likely to ensure the latter. This "should" be easy to fix...
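
Something along these lines ought to work, at least for the trunk chain
(again just a sketch, with made-up helper names rather than the existing
rcs2git code): keep the current full text around while walking the delta
chain from the head, and write a blob at each step instead of
re-expanding the whole chain for every revision.

#include <stddef.h>

struct rcs_text { char *buf; size_t len; };

struct rcs_rev {
	struct rcs_rev	*next;		/* next older revision on the chain */
	struct rcs_text	 delta;		/* reverse delta stored in the ,v file */
	unsigned char	 sha1[20];	/* blob sha1, filled in as we go */
};

/* hypothetical stand-ins for the real delta application and blob writing */
extern void apply_rcs_delta(struct rcs_text *text, const struct rcs_text *delta);
extern void write_blob(const struct rcs_text *text, unsigned char sha1_out[20]);

/* 'text' starts out as the fully expanded head revision */
static void
convert_chain(struct rcs_text *text, struct rcs_rev *head)
{
	struct rcs_rev	*rev;

	for (rev = head; rev; rev = rev->next) {
		write_blob(text, rev->sha1);		/* emit this revision's blob */
		if (rev->next)
			apply_rcs_delta(text, &rev->next->delta);
	}
}

Branch deltas run forward from their branch point rather than backward
from the head, but the same incremental approach applies there.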

-- 
keith.packard@xxxxxxxxx
