On 8/5/06, Shawn Pearce <spearce@xxxxxxxxxxx> wrote:
> Jon Smirl <jonsmirl@xxxxxxxxx> wrote:
> > Process size is 2.6GB when the seg fault happens. That's a lot of
> > memory to build a pack index over 1M objects.
> >
> > I'm running a 3:1 process address space split. I wonder why it didn't
> > grow all the way to 3GB. I still have RAM and swap available.
>
> Was the pack you are trying to index built with that fast-import.c I
> sent last night? It's possible it's doing something weird that
> pack-index can't handle, such as inserting a duplicate object into the
> same pack...

Built with fast-import.

> How big is the pack file? I'd expect pack-index to be using something
> around 24 MB of memory (24 bytes/entry), but maybe it's hanging onto a
> lot of data (memory leak?) as it decompresses the entries to compute
> the checksums.

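A quick back-of-the-envelope check of that 24 bytes/entry figure. The
struct below is a hypothetical per-entry record, not the actual layout
in index-pack.c, but a 20-byte SHA-1 plus a 32-bit pack offset comes out
to 24 bytes on common ABIs, so roughly 1M objects should only cost about
24 MB of index bookkeeping:

/* Hypothetical per-object record; NOT the real struct from index-pack.c. */
#include <stdio.h>
#include <stdint.h>

struct index_entry {
        unsigned char sha1[20];   /* object name */
        uint32_t offset;          /* where the object starts in the pack */
};

int main(void)
{
        unsigned long nr_objects = 1000000;    /* ~1M objects, as above */
        unsigned long bytes = nr_objects * sizeof(struct index_entry);

        printf("%zu bytes/entry, %.1f MB total\n",
               sizeof(struct index_entry), bytes / 1e6);
        return 0;
}

Even with generous overhead for hashing those entries, that is nowhere
near 2.6GB, which points at the decompressed object data rather than the
index records themselves.
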
It is 934MB in size with 985,000 entries.

Why does resolve_delta in index-pack.c need to be recursive? Is there a
better way to code that routine?

If it mmaps the file, that uses 1GB of address space; why does it need
another 1.5GB to build an index? I had a prior 400MB pack file built
with fast-import that I was able to index ok.

--
Jon Smirl
jonsmirl@xxxxxxxxx
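
On the resolve_delta question: a delta stored in the pack names a base
object, and that base is often itself a delta, so index-pack can only
compute an object's SHA-1 after reconstructing its entire chain. Once
one object's data exists, every delta that uses it as a base can be
resolved, and each of those results can in turn be a base for further
deltas, which is where the recursion comes from. The toy program below
is not the index-pack.c code, just the shape of the problem: if the pack
contains very long delta chains, each level of the recursion keeps a
stack frame (and, if the real code keeps the base's data live while
walking its children, an inflated buffer) alive for every ancestor in
the chain.

/*
 * Toy model of why resolving deltas in a pack is naturally recursive.
 * The types and helpers here are made up; real delta application is
 * replaced by a plain copy.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct obj {
        int base;        /* index of the base object, or -1 if stored whole */
        char *data;      /* reconstructed content, NULL until resolved */
        size_t size;
};

/* Stand-in for real delta application: here we just copy the base. */
static char *apply_delta(const char *base, size_t base_size, size_t *out_size)
{
        char *buf = malloc(base_size);
        memcpy(buf, base, base_size);
        *out_size = base_size;
        return buf;
}

/*
 * Recursive shape: while one object's children are being resolved, every
 * ancestor in its delta chain still has a live stack frame, so the cost
 * grows with the depth of the longest chain.
 */
static void resolve_children(struct obj *objs, int nr, int base_idx)
{
        for (int i = 0; i < nr; i++) {
                if (objs[i].base != base_idx || objs[i].data)
                        continue;
                objs[i].data = apply_delta(objs[base_idx].data,
                                           objs[base_idx].size,
                                           &objs[i].size);
                resolve_children(objs, nr, i);   /* recurse down the chain */
        }
}

int main(void)
{
        enum { NR = 5 };
        struct obj objs[NR];
        const char base_content[] = "base object content";

        /* Build one long chain: 0 is a whole object, i is a delta on i-1. */
        for (int i = 0; i < NR; i++) {
                objs[i].base = i - 1;
                objs[i].data = NULL;
                objs[i].size = 0;
        }
        objs[0].size = sizeof(base_content);
        objs[0].data = malloc(objs[0].size);
        memcpy(objs[0].data, base_content, objs[0].size);

        resolve_children(objs, NR, 0);

        for (int i = 0; i < NR; i++)
                printf("object %d resolved: %s\n", i, objs[i].data ? "yes" : "no");

        for (int i = 0; i < NR; i++)
                free(objs[i].data);
        return 0;
}

An iterative version that keeps an explicit stack or work queue of
resolved-but-unvisited objects would bound the native stack depth,
though the intermediate buffers still have to live until their children
are resolved.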