on Wed Jan 28 2009, David Abrahams <dave-AT-boostpro.com> wrote:

> On Wed, 28 Jan 2009 00:02:25 -0500, Jeff King <peff@xxxxxxxx> wrote:
>> On Tue, Jan 27, 2009 at 10:04:42AM -0500, David Abrahams wrote:
>>
>>> I've been abusing Git for a purpose it wasn't intended to serve:
>>> archiving a large number of files with many duplicates and
>>> near-duplicates. Every once in a while, when trying to do something
>>> really big, it tells me "malloc failed" and bails out (I think it's
>>> during "git add", but because of the way I issued the commands I can't
>>> tell: it could have been a commit or a gc). This is on a 64-bit Linux
>>> machine with 8G of RAM and plenty of swap space, so I'm surprised.
>>>
>>> Git is doing an amazing job of archiving and compressing all this stuff
>>> I'm putting into it, but I have to do it a wee bit at a time or it
>>> craps out. Bug?
>>
>> How big is the repository? How big are the biggest files? I have a
>> 3.5G repo with files ranging from a few bytes to about 180M. I've
>> never run into malloc problems or gone into swap on my measly 1G box.
>> How does your dataset compare?
>
> I'll try to do some research. Gotta go pick up my boy now...

Well, moving the 2.6G .dar backup binary out of the fileset seems to
have helped a little, not surprisingly :-P

I don't know whether anyone on this list should care about that failure
given the level of abuse I'm inflicting on Git, but keep in mind that
the system *does* have 8G of memory. Conclude what you will from that,
I suppose!

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
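[Editor's note: Jeff's question upthread, "How big are the biggest files?", can be answered with a short pipeline over Git's object database. This is a sketch, not anything from the thread itself: the throwaway repo and the file name `big.txt` are purely illustrative, and the `--batch-check` format string assumes a reasonably recent Git.]

```shell
# Build a tiny throwaway repo so the pipeline below has something to inspect.
# (In practice you would run the pipeline inside the real repository.)
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
printf 'hello world' > big.txt
git add big.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm demo

# List every blob with its size and path, largest first.  This is a quick
# way to spot the handful of huge files (like a 2.6G .dar backup) that can
# make "git add" or "git gc" allocate enormous buffers.
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectsize) %(rest)' \
  | awk '$1 == "blob" { print $2, $3 }' \
  | sort -rn \
  | head -10
```

A pipeline like this makes it easy to decide which files to move out of the fileset, as Dave did with the backup binary.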