Re: git-p4 out of memory for very large repository

On Fri, Aug 23, 2013 at 08:16:58AM +0100, Luke Diamand wrote:
> On 23/08/13 02:12, Corey Thompson wrote:
> >Hello,
> >
> >Has anyone actually gotten git-p4 to clone a large Perforce repository?
> 
> Yes. I've cloned repos with a couple of gigabytes of files.
> 
> >I have one codebase in particular that gets to about 67%, then
> >consistently gets git-fast-import (and oftentimes a few other
> >processes) killed by the OOM killer.
> 
> What size is this codebase? Which version and platform of git are you using?
> 
> Maybe it's a regression, or perhaps you've hit some new, previously
> unknown size limit?
> 
> Thanks
> Luke
> 
> 
> >
> >I've found some patches out there that claim to resolve this, but
> >they're all for versions of git-p4.py from several years ago.  Not only
> >will they not apply cleanly, but as far as I can tell the issues that
> >these patches are meant to address aren't in the current version,
> >anyway.
> >
> >Any suggestions would be greatly appreciated.
> >
> >Thanks,
> >Corey
> 

Sorry, I guess I could have included more details in my original post.
Since then, I have also attempted to clone another (slightly more
recent) branch, and at last had success.  So this does indeed work; it
just seems to be very unhappy with one particular branch.

So, here are a few statistics I collected on the two branches.

branch-that-fails:
  total workspace disk usage (current head): 12 GB
  68 files over 20 MB
  the largest three being about 118 MB each

branch-that-clones:
  total workspace disk usage (current head): 11 GB
  22 files over 20 MB
  the largest three being about 80 MB each
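For reference, I gathered those numbers with something along these
lines, run from the head of each workspace (the -printf bit assumes
GNU find):

  # total workspace disk usage
  du -sh .
  # count of files over 20 MB
  find . -type f -size +20M | wc -l
  # three largest files, size in bytes
  find . -type f -printf '%s %p\n' | sort -rn | head -3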

I suspect that part of the problem here might be that my company likes
to submit very large binaries to our repo (.tar.gz archives,
pre-compiled third-party binaries, etc.).

Is there any way I can clone this in pieces?  The best I've come up
with is to clone only up to a change number just before the point
where it tends to fail, and then rebase to the latest.  The clone
succeeded, but the rebase still runs out of memory.  It would be great
if I could specify a change number to rebase up to, so that I could
take this thing a few hundred changes at a time; a rough sketch of
what I'm after is below.
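In case it's useful, here is roughly the shape of what I mean.  The
depot path and change numbers are made up, and I haven't yet verified
that the --max-changes option to git p4 sync avoids the OOM, but it
does at least bound how many changes any single run imports:

  # clone only up to a change just before the failure point
  git p4 clone //depot/project@1,150000 proj
  cd proj
  # then import the remaining history a few hundred changes per run;
  # rerun this until the import is caught up
  git p4 sync --max-changes 500

Once the import is caught up, git p4 rebase should have very little
left to fetch in any one go.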

Thanks,
Corey