Re: [PATCH 5/7] tmp-objdir: new API for creating and removing primary object dirs

On Fri, Oct 01 2021, Jeff King wrote:

> On Thu, Sep 30, 2021 at 09:26:37PM -0700, Elijah Newren wrote:
>
>> >  - This side-steps all of our usual code for getting object data into
>> >    memory. In general, I'd expect this content to not be too enormous,
>> >    but it _could_ be if there are many / large blobs in the result. So
>> >    we may end up with large maps. Probably not a big deal on modern
>> >    64-bit systems. Maybe an issue on 32-bit systems, just because of
>> >    virtual address space.
>> >
>> >    Likewise, we do support systems with NO_MMAP. They'd work here, but
>> >    it would probably mean putting all that object data into the heap. I
>> >    could live with that, given how rare such systems are these days, and
>> >    that it only matters if you're using --remerge-diff with big blobs.
>> 
>> Um, I'm starting to get uncomfortable with this pretend_object stuff.
>> Part of the reason that merge-ort isn't truly "in memory" despite
>> attempting to do exactly that, was because for large enough repos with
>> enough files modified on both sides, I wasn't comfortable assuming
>> that all new files from three-way content merges and all new trees fit
>> into memory.  I'm sure we'd be fine with current-day linux kernel
>> sized repos.  No big deal.  In fact, most merges probably don't add
>> more than a few dozen new files.  But for microsoft-sized repos, and
>> with repos tending to grow over time, more so when the tools
>> themselves scale nicely (which we've all been working on enabling),
>> makes me worry there might be enough new objects within a single merge
>> (especially given the recursive inner merges) that we might need to
>> worry about this.
>
> I do think we need to consider that the content might be larger than
> will comfortably fit in memory. But the point of using mmap is that we
> don't have to care. The OS is taking care of it for us (just like it
> would in regular object files).
>
> The question is just whether we're comfortable assuming that mmap
> exists if you're working on such a large repository. I'd guess that big
> repos are pretty painful without it (and again, I'm not even clear
> which systems Git runs on even lack mmap these days). So IMHO this isn't
> really a blocker for going in this direction.

On the "not a blocker" point: even without mmap() such a user also has
the option of increasing the size of their swap/page file.

And generally I agree that it's fair for us to say that if you've got
such outsized performance needs you're going to need something more than
the lowest common denominator git can be ported to.

I also wouldn't be surprised if in that scenario we'd run faster using
memory (and no mmap) than if we tried to fall back to the FS, i.e. your
suggestion of replacing N loose objects with one file with index
offsets is basically what the OS would be doing for you as it pages out
memory it can't fit in RAM to disk.


