Re: [PATCH v5 0/5] merge-ort: implement support for packing objects together

On Mon, Nov 06, 2023 at 04:46:32PM +0100, Johannes Schindelin wrote:
> Hi,
>
> On Mon, 23 Oct 2023, Junio C Hamano wrote:
>
> > Taylor Blau <me@xxxxxxxxxxxx> writes:
> >
> > > But I think that this approach ended up being less heavy-weight than I
> > > had originally imagined, so I think that this version is a worthwhile
> > > improvement over v4.
> >
> > ;-).
> >
> > This version is a good place to stop, a bit short of going full OO.
> > Nicely done.
>
> I wonder whether a more generic approach would be more desirable, an
> approach that would work for `git replay`, too, for example (where
> streaming objects does not work, because they need to be made available
> immediately: a subsequent `merge_incore_nonrecursive()` might expect
> the created objects to be present)?

The goal of this series is to reduce the number of inodes consumed by a
single merge-tree invocation from arbitrarily many (when each new object
is stored loose) to a constant (by storing everything in a single pack).
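To make the inode arithmetic concrete, here is a small Python sketch
(not Git code; the sharded layout just mirrors Git's `objects/ab/cd...`
scheme for loose objects) comparing the two storage strategies:

```python
import hashlib
import os
import tempfile

def store_loose(objdir, payloads):
    """One file per object, sharded by the first two hex digits of its
    ID, mirroring Git's objects/ab/cdef... layout for loose objects."""
    for data in payloads:
        oid = hashlib.sha1(data).hexdigest()
        shard = os.path.join(objdir, oid[:2])
        os.makedirs(shard, exist_ok=True)
        with open(os.path.join(shard, oid[2:]), "wb") as f:
            f.write(data)

def store_packed(objdir, payloads):
    """All objects concatenated into a single pack-like file."""
    with open(os.path.join(objdir, "pack-0001.pack"), "wb") as f:
        for data in payloads:
            f.write(data)

def count_inodes(objdir):
    """Count files plus directories created under objdir."""
    total = 0
    for _root, dirs, files in os.walk(objdir):
        total += len(dirs) + len(files)
    return total

payloads = [b"blob %d" % i for i in range(1000)]

with tempfile.TemporaryDirectory() as loose, \
     tempfile.TemporaryDirectory() as packed:
    store_loose(loose, payloads)
    store_packed(packed, payloads)
    loose_inodes = count_inodes(loose)    # grows with the object count
    packed_inodes = count_inodes(packed)  # constant: one pack file

print(loose_inodes, packed_inodes)
```

The loose layout costs at least one inode per object (plus one per
occupied shard directory), while the packed layout stays at one
regardless of how many objects the merge creates.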

I'd think that we would want a similar approach for 'replay', but as
you note, we have some additional requirements, too:

  - each replayed commit is computed in a single step, which will result
    in a new pack
  - we must be able to see objects from previous steps

I think one feasible approach here for replay is to combine the two
ideas and have a separate objdir that stores N packs (one for each step
of the replay), but then repacks them down into a single pack before
migrating back to the main object store.
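As a rough schematic of that flow (this is deliberately not Git's
actual tmp-objdir or packfile API; the function name, the flat "pack"
files, and the byte-concatenation "repack" are all illustrative only):

```python
import os
import shutil
import tempfile

def replay_with_tmp_objdir(main_objdir, steps):
    """Schematic model of the proposed flow: each replay step writes one
    pack into a temporary object directory, which is then repacked down
    to a single pack and migrated into the main object store."""
    tmp = tempfile.mkdtemp(prefix="replay-objdir-")
    try:
        # One pack per replayed commit; later steps can read any pack
        # already written to `tmp`, so objects are visible immediately.
        for i, objects in enumerate(steps):
            with open(os.path.join(tmp, f"pack-step-{i:04}.pack"), "wb") as f:
                for data in objects:
                    f.write(data)

        # Repack the N per-step packs down into a single pack...
        final = os.path.join(tmp, "pack-final.pack")
        with open(final, "wb") as out:
            for name in sorted(os.listdir(tmp)):
                if name.startswith("pack-step-"):
                    path = os.path.join(tmp, name)
                    with open(path, "rb") as f:
                        shutil.copyfileobj(f, out)
                    os.unlink(path)

        # ...and migrate only that one pack back to the main store.
        os.makedirs(main_objdir, exist_ok=True)
        shutil.move(final, os.path.join(main_objdir, "pack-final.pack"))
    finally:
        shutil.rmtree(tmp, ignore_errors=True)

main = tempfile.mkdtemp(prefix="main-objdir-")
replay_with_tmp_objdir(main, [[b"commit-1"], [b"commit-2", b"tree-2"]])
print(os.listdir(main))
```

The point is only the shape of the lifecycle: N short-lived packs in an
isolated objdir, one repack, one migration, so the main object store
never sees the intermediate state.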

That would ensure that we have some isolation between replay-created
objects and the rest of the repository in the intermediate state. Even
though we'd have as many packs as there are commits, we'd consume far
fewer inodes in the process, since each commit can introduce arbitrarily
many new objects, each requiring at least a single inode (potentially
more with sharding).

We'd have to be mindful of having a large number of packs, but I think
that this should mostly be a non-issue, since we'd only be living with N
packs for the lifetime of the replay command (before repacking them down
to a single pack and migrating them back to the main object store).

Thanks,
Taylor
