Re: Git memory usage (1): fast-import

Sam Hocevar <sam@xxxxxxx> wrote:
>    I joined a project that uses very large binary files (up to 1 GiB) in
> a p4 repository and as I would like to use Git, I am trying to make it
> more memory-efficient when handling huge files.

Yikes.  As you saw, this won't play well...
 
>    In practice, it takes even more memory than that. Experiment shows
> that importing six 100 MiB files made of urandom data takes 370 MiB of
> memory [...]

Yes.

As you saw, that memory is the last object, the current object, the
delta index built over the last object (so the current one can be
compared against it more efficiently), and the deflate buffer for the
current object, plus probably some memory fragmentation on top of
all that....

I'm not surprised a 100 MiB file turned into 370 MiB heap usage.
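Back-of-the-envelope (these are my guesses, not measured numbers):
roughly 100 MiB for the last object held in memory, another 100 MiB
for the current object being read, something on the order of the blob
size again for the delta index over the last object, plus the deflate
output buffer and whatever the allocator loses to fragmentation.
That lands you in the same ballpark as the 370 MiB you measured.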
 
>    - stop trying to compute deltas in fast-import and leave that task
>    to other tools

This isn't practical for source code imports, unless we do...

> (optionally, define a file size threshold beyond
>    which the last file is not kept in memory, and maybe make that a
>    configuration option).

what you suggest here.  fast-import is faster than other methods
because we get some delta compression on the content, so the output
pack uses up less virtual memory when the front-end or end-user
finally gets around to doing `git repack -a -d -f` to recompute
the delta chains.
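(For the end-user that final pass is just the usual repack; the window
and depth values below are only an example, not a recommendation:)

    # after the import completes, recompute the delta chains
    git repack -a -d -f --window=50 --depth=50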

>    - use a temporary file to store the deflate data when it reaches a
>    given size threshold (and maybe make that a configuration option).

Zoiks.  There's no reason for that.

A better method would be to just look at the size of the incoming
blob, and if it's over some configured threshold (a default of, say,
100 MiB is perhaps sane) we just stream the data through deflate()
and into the pack file, with no delta compression.

That would also bypass the "massive" buffer in the last object slot,
as you point out above.
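
Roughly like the sketch below.  This is only an illustration of the
streaming idea, not code against the real fast-import internals;
stream_blob_to_pack() and CHUNK are invented names, and the pack
object header, SHA-1 computation and error handling are all left out:

/*
 * Sketch only: stream a big blob straight through deflate() and into
 * the pack file, never holding the whole thing in memory.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <zlib.h>

#define CHUNK (128 * 1024)

static void stream_blob_to_pack(int blob_fd, FILE *pack)
{
	unsigned char in[CHUNK], out[CHUNK];
	z_stream s;
	int flush;

	memset(&s, 0, sizeof(s));
	deflateInit(&s, Z_DEFAULT_COMPRESSION);

	do {
		ssize_t n = read(blob_fd, in, sizeof(in));
		/* a short read on a regular file means end of input */
		flush = (n < (ssize_t)sizeof(in)) ? Z_FINISH : Z_NO_FLUSH;
		s.next_in = in;
		s.avail_in = (n > 0) ? n : 0;
		do {
			s.next_out = out;
			s.avail_out = sizeof(out);
			deflate(&s, flush);
			fwrite(out, 1, sizeof(out) - s.avail_out, pack);
		} while (s.avail_out == 0);
	} while (flush != Z_FINISH);

	deflateEnd(&s);
}

Peak memory then stays at two fixed-size buffers plus zlib's own
state, no matter how big the blob is.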
 
>    - also, I haven't tracked all strbuf_* uses in fast-import, but I got
>    the feeling that strbuf_release() could be used in a few places
>    instead of strbuf_setlen(0) in order to free some memory.

Examples?  I haven't gone through the code in detail since it
was modified to use strbufs.  But I had the feeling that the strbufs
the code doesn't free are ones it just reuses on the next command,
and those are likely to be "smallish", e.g. just a few KiB in size.

-- 
Shawn.