observations on parsecvs testing

My machine is a P4 @ 3GHz with 1GB of RAM.

Feeding parsecvs with the Mozilla repository, it first ran for 175 
minutes with about 98% CPU spent in user space, reading the 100458 ,v 
files and writing 700000+ blob objects.  Memory usage grew to 1789MB 
total while resident memory saturated around 700MB.  This part was 
fine even with 1GB of RAM since unused memory was gently pushed to swap.  
The only problem is that the spawned git-pack-objects instances started 
failing with memory allocation errors by that time, which is 
unfortunate but not fatal.

But then things started to go bad once all the ,v files had been 
parsed.  parsecvs dropped to about 3% CPU, with the rest of the time 
spent waiting on swap IO, so no substantial progress was being made at 
that point.

So the Mozilla repository clearly requires 2GB of RAM to realistically 
be converted to GIT using parsecvs, unless its second phase is reworked 
to avoid totally random memory access in order to improve swap 
behavior, or its in-memory data set is shrunk by at least half.
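
Just to illustrate the kind of saving I have in mind for the "shrunk by 
at least half" option (a generic example only, not parsecvs's actual 
data structures): keeping object names as raw SHA-1 bytes instead of 
hex strings halves that part of every in-memory revision record.

/*
 * Generic illustration; these are NOT parsecvs's real structures.
 */
#include <stdio.h>

struct rev_hex { char sha1_hex[41]; };		/* 40 hex chars + NUL */
struct rev_bin { unsigned char sha1[20]; };	/* raw binary SHA-1 */

int main(void)
{
	printf("%zu bytes vs %zu bytes per revision\n",
	       sizeof(struct rev_hex), sizeof(struct rev_bin));
	return 0;
}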

Also, rcs2git() is very inefficient, especially with files having many 
revisions, since it reconstructs the delta chain on every call.  For 
example mozilla/configure,v has at least 1690 revisions, and actually 
converting it into GIT blobs goes at a rate of _only_ 2.4 objects per 
second on my machine, i.e. roughly 12 minutes for that single file.  
Can't the objects be created as the delta list is walked/applied 
instead?  That would significantly reduce the initial conversion time.
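
A rough sketch of what I mean (apply_delta(), write_blob() and the 
rcs_rev list below are hypothetical stand-ins, not the actual parsecvs 
internals): each delta is applied exactly once and a blob is written 
immediately, so a file with 1690 revisions costs 1690 delta 
applications instead of on the order of 1690^2/2.

/*
 * Hypothetical sketch, not actual parsecvs code: walk the delta chain
 * once and write a blob at each step instead of re-expanding the whole
 * chain for every revision.
 */
#include <stdlib.h>

struct rcs_rev {
	struct rcs_rev *next;		/* next (older) revision */
	const char *delta;		/* RCS diff against previous text */
	size_t delta_len;
};

/* hypothetical helpers standing in for parsecvs internals */
extern char *apply_delta(const char *text, size_t len,
			 const char *delta, size_t delta_len,
			 size_t *out_len);
extern void write_blob(const char *text, size_t len,
		       unsigned char sha1[20]);

static void convert_chain(const char *head_text, size_t head_len,
			  struct rcs_rev *chain)
{
	char *text = NULL;
	size_t len = head_len;
	unsigned char sha1[20];
	struct rcs_rev *rev;

	write_blob(head_text, head_len, sha1);	/* head revision first */

	for (rev = chain; rev; rev = rev->next) {
		/* one delta application and one blob per revision */
		char *next = apply_delta(text ? text : head_text, len,
					 rev->delta, rev->delta_len, &len);
		free(text);
		text = next;
		write_blob(text, len, sha1);
	}
	free(text);
}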


Nicolas