Re: performance on repack

On 8/12/07, Martin Koegler <mkoegler@xxxxxxxxxxxxxxxxx> wrote:
> On Sat, Aug 11, 2007 at 05:12:24PM -0400, Jon Smirl wrote:
> > If anyone is bored and looking for something to do, making the delta
> > code in git repack multithreaded would help. Yesterday I did a big
> > repack that took 20 minutes and it only used one of my four cores. It
> > was compute bound the entire time.
>
> First, how much time is used by the writing phase and how much by the
> deltify phase?

95% in deltify

>
> If the writing phase takes too much time and you have enough free
> memory, you can try raising the config variable pack.deltacachelimit
> (default 1000). It will save an additional delta operation for every
> object whose delta is smaller than pack.deltacachelimit, by caching
> the delta.

I have 4GB RAM and fast RAID 10 disk, writing happens very quickly.
Pretty much everything is in RAM.
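
(If the writing phase ever did become the bottleneck, raising that knob
is a one-line config change, e.g.

    git config pack.deltacachelimit 4096

where 4096 is just an illustration, not a tested recommendation.)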

> Have you considered the impact on memory usage if there are large
> blobs in the repository?

The process size maxed at 650MB. I'm in 64-bit mode so there is no
practical virtual address space limit.

On 32-bit there's windowing code for accessing the packfile, since we
can run out of address space; does that code get turned off for 64-bit?
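
(As far as I understand, the same sliding-window mmap code is used on
64-bit builds too, just with much larger platform-dependent defaults;
the relevant knobs are core.packedGitWindowSize and core.packedGitLimit,
e.g.

    git config core.packedGitWindowSize 1g
    git config core.packedGitLimit 8g

with these values purely illustrative, not the actual defaults.)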

>
> While repacking, git keeps $window_size (default: 10) objects unpacked
> in memory. For all of them except one, it additionally stores the
> delta index, which is about the same size as the object itself.
>
> So the worst case memory usage is "sizeof(biggest object)*(2*$window_size - 1)".
> If you have blobs >=100 MB, you need some GB of memory.
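
To put a number on that: with the default window of 10 and a single
100 MB blob, the formula above gives roughly

    100 MB * (2*10 - 1) = 1.9 GB

so a few such blobs quickly push repack into the multi-GB range.
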
>
> Partitioning the problem is not trivial:
>
> * To avoid making the packing results worse, we must first sort all
>   objects by type, path and size. Then we can split the list (one part
>   per task), and each part can be deltified independently.
>
>   The problems are:
>
>   - We need more memory, as each task keeps its own window of
>     $window_size objects (+ delta indexes) in memory.
>
>   - The list must be split into parts that require the same amount of
>     time. This is difficult, as it depends on the size of the objects as
>     well as on how they are stored (delta chain length).
>
> * On the other hand, we could run all try_delta operations for one object
>   in parallel. This way we would not need much more memory, but would
>   require more synchronization (and more complex code).

This solution was my first thought too. Use the main thread to get
everything needed for the object into RAM, then multi-thread the
compute-bound, in-memory delta search. Shared CPU caches might make
this very fast.
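
Very rough sketch of what I have in mind (untested, and not git's
actual data structures - try_delta_size() below is just a stand-in for
the real delta computation): the main thread fills the window, then
worker threads split the candidates between them and each reports the
smallest delta it found for the one target object:

/*
 * Sketch only: the main thread unpacks the target object and its delta
 * window into memory, then worker threads scan disjoint slices of the
 * window looking for the smallest delta against the target.
 */
#include <pthread.h>
#include <stdio.h>

#define WINDOW   10
#define NTHREADS 4

struct object_buf {
    const unsigned char *data;
    size_t len;
};

struct best {
    size_t delta_size;   /* smallest delta seen so far in this slice */
    int base_index;      /* which window entry produced it, -1 = none */
};

struct slice {
    const struct object_buf *target;
    const struct object_buf *window;
    int begin, end;      /* half-open range of window entries to try */
    struct best result;
};

/* Stand-in for the expensive, CPU-bound delta computation. */
static size_t try_delta_size(const struct object_buf *target,
                             const struct object_buf *base)
{
    /* Fake cost model: pretend the delta is the size difference. */
    return target->len > base->len ? target->len - base->len
                                   : base->len - target->len;
}

static void *search_slice(void *arg)
{
    struct slice *s = arg;
    for (int i = s->begin; i < s->end; i++) {
        size_t sz = try_delta_size(s->target, &s->window[i]);
        if (sz < s->result.delta_size) {
            s->result.delta_size = sz;
            s->result.base_index = i;
        }
    }
    return NULL;
}

int main(void)
{
    /* Main thread's job: get the target and the window into RAM. */
    struct object_buf target = { (const unsigned char *)"target", 600 };
    struct object_buf window[WINDOW];
    for (int i = 0; i < WINDOW; i++) {
        window[i].data = (const unsigned char *)"base";
        window[i].len = 100 * (i + 1);   /* fake sizes for the cost model */
    }

    pthread_t tid[NTHREADS];
    struct slice slices[NTHREADS];
    int per = (WINDOW + NTHREADS - 1) / NTHREADS;

    for (int t = 0; t < NTHREADS; t++) {
        slices[t].target = &target;
        slices[t].window = window;
        slices[t].begin = t * per;
        slices[t].end = (t + 1) * per < WINDOW ? (t + 1) * per : WINDOW;
        slices[t].result.delta_size = (size_t)-1;
        slices[t].result.base_index = -1;
        pthread_create(&tid[t], NULL, search_slice, &slices[t]);
    }

    struct best best = { (size_t)-1, -1 };
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        if (slices[t].result.base_index >= 0 &&
            slices[t].result.delta_size < best.delta_size)
            best = slices[t].result;
    }

    if (best.base_index >= 0)
        printf("best base: window[%d], delta ~%zu bytes\n",
               best.base_index, best.delta_size);
    return 0;
}

The slicing keeps the workers read-only over shared data, so the only
synchronization needed is the final join; the hard part in the real
code would be building or sharing the delta indexes per thread.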

-- 
Jon Smirl
jonsmirl@xxxxxxxxx
