Re: Reduce pack-objects memory footprint?

On Wed, Feb 28, 2018 at 5:17 PM, Jeff King <peff@xxxxxxxx> wrote:
> On Wed, Feb 28, 2018 at 04:27:22PM +0700, Duy Nguyen wrote:
>
>> linux-2.6.git current has 6483999 objects. "git gc" on my poor laptop
>> consumes 1.7G out of 4G RAM, pushing lots of data to swap and making
> all apps nearly unusable (granted the problem is partly Linux I/O
>> scheduler too). So I wonder if we can reduce pack-objects memory
>> footprint a bit.
>
> Yeah, the per object memory footprint is not great. Around 100 million
> objects it becomes pretty ridiculous. I started to dig into it a year or
> three ago when I saw such a case, but it turned out to be something that
> we could prune.

We could? What could we prune?

> The torvalds/linux fork network has ~23 million objects,
> so it's probably 7-8 GB of book-keeping. Which is gross, but 64GB in a
> server isn't uncommon these days.

I wonder if we could just do bookkeeping for some but not all objects,
because keeping track of all objects simply does not scale. Say we have
a big pack of many GBs: could we keep the bottom 80% of it untouched
and register only the top 20% (mostly non-blobs, plus some blobs needed
as delta bases) for repacking? We would copy the bottom part to the new
pack byte-by-byte, then pack-objects rebuilds the top part together
with objects from other sources.

That would of course be less optimal, because we could not delta
against the objects at the bottom. And it assumes those packs were
generated with the same heuristics that we use.
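
To make the shape of that idea a bit more concrete, a rough sketch
(none of this is real pack-objects code; the cutoff offset and the
trailer handling are hand-waved assumptions):

/*
 * Hypothetical sketch only: freeze the bottom of an existing pack by
 * copying its bytes verbatim; "cutoff" (where the top 20% starts) and
 * the trailer handling are assumptions, not anything we do today.
 */
#include <stdio.h>

static int copy_frozen_bottom(FILE *old_pack, FILE *new_pack, long cutoff)
{
	char buf[8192];
	long remaining = cutoff;

	while (remaining > 0) {
		size_t want = remaining < (long)sizeof(buf) ?
			      (size_t)remaining : sizeof(buf);
		size_t got = fread(buf, 1, want, old_pack);

		if (!got)
			return -1;	/* old pack shorter than expected */
		if (fwrite(buf, 1, got, new_pack) != got)
			return -1;
		remaining -= (long)got;
	}
	/*
	 * The top 20% (non-blobs plus blobs used as delta bases) would
	 * then go through the usual object_entry machinery and be
	 * appended here, followed by a recomputed pack trailer.
	 */
	return 0;
}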

> I think laptops repacking the kernel are probably one of the worst cases
> (leaving aside the awful Windows repository, but my impression is that
> they simply can't do a full repack at all there).

Yeah. My fear is "git gc --auto" kicking in. Even on a laptop we should
support working on a repo as big as linux-2.6 (or as small as that is
from a facebook/microsoft perspective). I will probably add a mode to
"git gc --auto" that leaves the largest pack alone and just packs the
rest into a second pack. That might help (I haven't really checked the
details yet, but the object_entry bookkeeping should go down at least).

>> This demonstration patch (probably breaks some tests) would reduce the
>> size of struct object_entry from 136 down to 112 bytes on
>> x86-64. There are 6483999 of these objects, so the saving is 17% or
>> 148 MB.
>
> 136 x 6.5M objects is only about 800MB. I suspect a big chunk of the
> rest is going to the object structs we create when doing the internal
> rev-list traversal. And those duplicate the 20-byte sha1s at the very
> least.

They are 32 bytes per entry, so they should take less than the
object_entry array does. I briefly wondered if we should fall back to
an external rev-list too, just to free that memory.

So that's about 200 MB for those objects (or maybe more, since commits
are larger). Add the 256 MB delta cache on top and it's still quite a
bit short of 1.7G. There's something I'm still missing.
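
Writing the back-of-the-envelope arithmetic down (using only the
numbers quoted in this thread, so the 32-byte and 256 MB figures are
estimates, not measurements):

#include <stdio.h>

int main(void)
{
	unsigned long long nr = 6483999;	 /* objects in linux-2.6.git */
	unsigned long long entry_sz = 136;	 /* struct object_entry, x86-64 */
	unsigned long long obj_sz = 32;		 /* struct object, per the estimate above */
	unsigned long long cache = 256ULL << 20; /* delta base cache */
	unsigned long long total = nr * entry_sz + nr * obj_sz + cache;

	/* prints 1294, i.e. ~1.3G, still well short of the observed 1.7G */
	printf("%llu MB\n", total >> 20);
	return 0;
}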

> Another option would be to somehow replace the pack_idx_entry with a
> reference to a "struct object". That would let us at least avoid storing
> the 20-byte oid twice.

20 bytes saving? /me drools.

A pity we can't do the same for 'struct object'. Most of the time we
have a giant .idx file that already contains most of the hashes. We
could look an object up in both places, the hash table in object.c and
the idx file. Then objects that are associated with an .idx file would
not need the "oid" field (currently needed as the key for the hash
table). But I see no way to make that change.

>> If we go further, notice that nr_objects is uint32_t, we could convert
>> the three pointers
>>
>>       struct object_entry *delta;
>>       struct object_entry *delta_child;
>>       struct object_entry *delta_sibling;
>>
>> to
>>
>>       uint32_t delta;
>>       uint32_t delta_child;
>>       uint32_t delta_sibling;
>>
>> which saves 12 bytes (or another 74 MB). 222 MB total is plenty of
>> space to keep some file cache from being evicted.
>
> Yeah, that seems like low-hanging fruit. I'd also note that we don't
> actually use all of the fields during the whole process. I think some of
> those delta fields are only used for a short time. So we might be able
> to reduce peak memory if there are some mutually exclusive bits of each
> entry (and even if there's some overlap, we'd reduce the length of time
> we'd need to be at peak).

Yeah, it looks that way to me too. Now that we have packing_data,
splitting object_entry[] into multiple arrays (each of which is only
needed at some point in the process) may be very nice.
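
For instance, something in this direction (just a rough illustration,
not actual pack-objects code; which fields can move out and the exact
widths are assumptions on my part):

/*
 * Illustrative sketch only: the real object_entry has many more fields,
 * and exactly which ones can be split out or shrunk needs auditing.
 */
#include <stdint.h>

struct object_entry_slim {
	unsigned char sha1[20];
	uint32_t delta_idx;	/* index into objects[], 0 means "no delta" */
	unsigned type:3;
	unsigned depth:11;	/* shortened field, must be checked vs --depth */
};

struct delta_search_state {	/* only needed while finding deltas */
	uint32_t delta_child_idx;
	uint32_t delta_sibling_idx;
};

struct packing_data_sketch {
	struct object_entry_slim *objects;
	struct delta_search_state *search; /* can be freed after the search */
	uint32_t nr_objects;
};

/* Resolve a uint32_t "pointer"; index 0 is reserved for NULL. */
static inline struct object_entry_slim *
oe_delta(struct packing_data_sketch *pdata, const struct object_entry_slim *e)
{
	return e->delta_idx ? &pdata->objects[e->delta_idx - 1] : NULL;
}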

>> Is it worth doing this? The struct packing makes it harder to read
>> (and more fragile too). I added some more artificial limits like max
>> depth of 2^11. But I think 4096+ depth is getting unreasonable.
>
> I'm OK with a limit like 4096, as long as we notice when we hit the
> limit and behave reasonably. I think the algorithm in
> break_delta_chains() may temporarily store up to uint32_t depth. But I
> think we won't ever write anything into cur->depth larger than the
> max-depth limit. So it would probably be sufficient to just check that
> the --depth argument is reasonably sized and complain otherwise.
>
> I do agree this makes things a bit harder to read, but I think the
> benefits are pretty measurable. And may make a real usability difference
> on a large repository.

Thanks. I'll go make real patches then (with lots more checks to make
sure we don't overflow in shortened fields like depth or type).
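
For the depth field I imagine something along these lines (just a
sketch; OE_DEPTH_BITS and the message are placeholders for whatever the
real patches end up using):

/*
 * Sketch of the kind of guard needed once depth lives in a shortened
 * field; OE_DEPTH_BITS and the wording are placeholders.
 */
#include "git-compat-util.h"	/* for die(), inside git's tree */

#define OE_DEPTH_BITS 11

static void check_depth_limit(int depth)
{
	const int max_depth = (1 << OE_DEPTH_BITS) - 1;

	if (depth > max_depth)
		die("--depth %d is too deep; maximum supported is %d",
		    depth, max_depth);
}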
-- 
Duy


