Re: [PATCH v4 4/6] pack-objects: generate cruft packs at most one object over threshold

On Wed, Mar 12, 2025 at 12:13:10PM -0700, Elijah Newren wrote:
> > > > But in current Git we will keep repacking
> > > > the two together, only to generate the same two packs we started with
> > > > forever.
> > >
> > > Yes.  That is because the logic that decides these packs need to be
> > > broken and recombined is flawed.  Maybe it does not have sufficient
> > > information to decide that it is no use to attempt combining them,
> > > in which case leaving some more info to help the later invocation of
> > > repack to tell that it would be useless to attempt combining these
> > > packs when you do the initial repack would help, which was what I
> > > suggested.  You've thought about the issue much longer than I did,
> > > and would be able to come up with better ideas.
> >
> > I think in the short term I came up with a worse idea than you would
> > have ;-).
> >
> > Probably there is a way to improve this niche case as described above,
> > but I think the solution space is complicated enough, and the case
> > narrow enough, that it's not worth introducing that much complexity.
>
> Would it make sense to break the assumption that --max-cruft-size ==
> --max-pack-size and perhaps rename the former?  I think the problem is
> that the two imply different things (one is a minimum, the other a
> maximum), and thus really should be different values.  E.g.
> --combine-cruft-below-size that is set to e.g. half of
> --max-pack-size, and then you can continue combining cruft packs
> together until they do go above the cruft threshold, while avoiding
> actually exceeding the pack size threshold?

We could, though alternatively I think leaving the behavior as
presented in v3 is equally OK.

We'll never make a pack that is as large or larger than the given
--max-cruft-size, but because repack tries to aggregate smaller packs
together first, it's unlikely that we would ever repeatedly repack the
larger ones, leaving them effectively "frozen", which is the goal.
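To illustrate the aggregation strategy described above, here is a small
model (a hypothetical sketch, not Git's actual implementation; the
function name and the greedy smallest-first grouping are my own
simplification): cruft packs are combined smallest-first, a combined
pack is never allowed to reach the size limit, and packs already at or
over the limit are left untouched, i.e. "frozen".

```python
def plan_cruft_combination(pack_sizes, max_cruft_size):
    """Model of smallest-first cruft pack aggregation (hypothetical).

    Packs at or over max_cruft_size are frozen and never rewritten;
    the rest are grouped greedily, smallest first, so that no combined
    pack reaches max_cruft_size.
    """
    # Large packs are left alone, so repeated repacks don't churn them.
    frozen = [s for s in pack_sizes if s >= max_cruft_size]
    small = sorted(s for s in pack_sizes if s < max_cruft_size)

    groups, current = [], []
    for size in small:
        # Start a new group rather than let the combined pack reach
        # the threshold.
        if current and sum(current) + size >= max_cruft_size:
            groups.append(current)
            current = []
        current.append(size)
    if current:
        groups.append(current)
    return groups, frozen

# Three small packs are merged into one; the 200 MiB pack stays frozen.
groups, frozen = plan_cruft_combination([10, 20, 30, 200], 100)
```

Because a large pack never re-enters the grouping step, a steady-state
repository stops rewriting its biggest cruft packs, which is the
"frozen" behavior the message above refers to.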

Thanks,
Taylor
