Re: [PATCH 5/7] tmp-objdir: new API for creating and removing primary object dirs

On Thu, Sep 30, 2021 at 09:26:37PM -0700, Elijah Newren wrote:

> >  - This side-steps all of our usual code for getting object data into
> >    memory. In general, I'd expect this content to not be too enormous,
> >    but it _could_ be if there are many / large blobs in the result. So
> >    we may end up with large maps. Probably not a big deal on modern
> >    64-bit systems. Maybe an issue on 32-bit systems, just because of
> >    virtual address space.
> >
> >    Likewise, we do support systems with NO_MMAP. They'd work here, but
> >    it would probably mean putting all that object data into the heap. I
> >    could live with that, given how rare such systems are these days, and
> >    that it only matters if you're using --remerge-diff with big blobs.
> 
> Um, I'm starting to get uncomfortable with this pretend_object stuff.
> Part of the reason that merge-ort isn't truly "in memory", despite
> attempting to do exactly that, is that for large enough repos with
> enough files modified on both sides, I wasn't comfortable assuming
> that all new files from three-way content merges and all new trees
> would fit into memory.  I'm sure we'd be fine with current-day linux
> kernel sized repos.  No big deal.  In fact, most merges probably
> don't add more than a few dozen new files.  But for microsoft-sized
> repos, with repos tending to grow over time (more so when the tools
> themselves scale nicely, which we've all been working on enabling), I
> worry there might be enough new objects within a single merge
> (especially given the recursive inner merges) that this becomes a
> real problem.

I do think we need to consider that the content might be larger than
will comfortably fit in memory. But the point of using mmap is that we
don't have to care. The OS takes care of it for us (just as it does
for regular object files).

The question is just whether we're comfortable assuming that mmap
exists if you're working on such a large repository. I'd guess that big
repos are pretty painful without it (and again, I'm not even clear
which systems Git runs on even lack mmap these days). So IMHO this isn't
really a blocker for going in this direction.
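
To make the paging point concrete, here is a standalone POSIX sketch
(not Git's own xmmap()/packfile code; the file handling is purely
illustrative): the contents are mapped rather than read into a heap
buffer, so the kernel faults pages in and out on demand and a huge blob
never has to fit in physical memory at once. On a NO_MMAP build the
fallback would be a plain malloc() + read(), which is where the heap
concern comes from.

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <sys/stat.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          if (argc != 2) {
                  fprintf(stderr, "usage: %s <file>\n", argv[0]);
                  return 1;
          }

          int fd = open(argv[1], O_RDONLY);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          struct stat st;
          if (fstat(fd, &st) < 0) {
                  perror("fstat");
                  return 1;
          }
          if (!st.st_size) {
                  printf("empty file\n");
                  close(fd);
                  return 0;
          }

          /* Read-only mapping: no heap allocation for the contents. */
          void *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (map == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          close(fd);

          /* Pages are faulted in lazily as the data is touched. */
          size_t newlines = 0;
          const char *p = map;
          for (off_t i = 0; i < st.st_size; i++)
                  if (p[i] == '\n')
                          newlines++;
          printf("%zu newlines in %lld bytes\n",
                 newlines, (long long)st.st_size);

          munmap(map, st.st_size);
          return 0;
  }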

> >    The whole pretend-we-have-this-object thing
> >    may want to likewise make sure we don't write out objects that we
> >    already have in the real odb.
> 
> Right, so I'd have to copy the relevant logic from write_object_file()
> -- I think that means, instead of write_object_file()'s calls to
> freshen_packed_object() and freshen_loose_object(), I'd call
> find_pack_entry(), and make has_loose_object() in object-file.c
> non-static so that I can call it as well.  Does that sound right?

Yes. You _could_ call the same freshening functions, but since we know
that our objects are throw-away anyway, we do not even need to perform
that freshening. I think just has_object_file() would be sufficient for
your needs.
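
As a rough illustration of that guard (hedged: the function name and
surrounding plumbing below are made up, and the signatures reflect my
reading of the 2021-era git.git internals, so treat them as
approximate), the check could look something like:

  #include "cache.h"
  #include "object-store.h"

  /*
   * Hypothetical helper: write a merge-result object unless the real
   * odb already has it.  Since remerge-diff objects are throw-away,
   * no freshening of existing objects is needed.
   */
  static int write_result_if_missing(const void *buf, unsigned long len,
                                     const char *type,
                                     struct object_id *oid)
  {
          /*
           * 'oid' has already been computed for this content by the
           * caller.  If the object exists anywhere in the real odb
           * (loose or packed), there is nothing to write.
           */
          if (has_object_file(oid))
                  return 0;

          /* With a tmp-objdir active, this lands in the temp dir. */
          return write_object_file(buf, len, type, oid);
  }

find_pack_entry() plus a non-static has_loose_object() would work too,
but has_object_file() already covers both the packed and loose cases.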

-Peff


