Re: [PATCH 5/5] midx: implement midx_repack()

On 12/10/2018 9:32 PM, Stefan Beller wrote:
> On Mon, Dec 10, 2018 at 10:06 AM Derrick Stolee via GitGitGadget
> <gitgitgadget@xxxxxxxxx> wrote:
>> From: Derrick Stolee <dstolee@xxxxxxxxxxxxx>
>>
>> To repack using a multi-pack-index, first sort all pack-files by
>> their modified time. Second, walk those pack-files from oldest
>> to newest, adding the packs to a list if they are smaller than the
>> given pack-size. Finally, collect the objects from the multi-pack-
>> index that are in those packs and send them to 'git pack-objects'.
> Makes sense.
>
> With this operation we only coalesce some packfiles into a new
> pack file. So to perform the "complete" repack, this command
> has to be run repeatedly until there is at most one packfile
> left that is smaller than the batch size.

Well, the batch size essentially means "If a pack-file is larger than <size>, then leave it be. I'm happy with packs that large." This assumes that the reason the pack is that large is that it was already combined with other packs or contains a lot of objects.
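
Concretely, that selection pass could look something like the sketch below. This is a paraphrase, not the patch itself: struct pack_info and select_packs are made-up names, and the real series works against the multi-pack-index structures in midx.c.

#include <stdlib.h>
#include <sys/types.h>
#include <time.h>

/* Hypothetical stand-in for the per-pack data the midx already knows. */
struct pack_info {
	const char *name;
	time_t mtime;	/* modified time of the pack-file */
	off_t size;	/* size of the pack-file on disk */
};

static int by_mtime(const void *va, const void *vb)
{
	const struct pack_info *a = va, *b = vb;

	if (a->mtime != b->mtime)
		return a->mtime < b->mtime ? -1 : 1;
	return 0;
}

/*
 * Walk packs from oldest to newest, skipping packs that are already
 * at least batch_size, and stop once the marked packs together reach
 * batch_size. Returns how many packs were marked in include[].
 */
static int select_packs(struct pack_info *packs, int nr,
			off_t batch_size, int *include)
{
	off_t total = 0;
	int i, selected = 0;

	qsort(packs, nr, sizeof(*packs), by_mtime);
	for (i = 0; i < nr && total < batch_size; i++) {
		if (packs[i].size >= batch_size)
			continue;	/* big enough already; leave it be */
		include[i] = 1;
		total += packs[i].size;
		selected++;
	}
	return selected;
}

The objects of the marked packs are then what gets handed to 'git pack-objects'; everything else is left untouched, which is what keeps the operation safe to run alongside concurrent Git commands.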


> Imagine the following scenario:
>
>    There are 5 packfiles A, B, C, D, E,
>    created last Monday thru Friday (A is oldest, E youngest).
>    The sizes are [A=4, B=6, C=5, D=5, E=4].
>
>    You'd issue a repack with batch size=10, such that
>    A and B would be repacked into F, which is
>    created today, with size less than or equal to 10.
>
>    You issue another repack tomorrow, which then would
>    coalesce C and D into G, which is
>    dated tomorrow, with size less than or equal to 10 as well.
>
>    You issue a third repack, which then takes E
>    (as it is the oldest) and would probably find F as the
>    next oldest (assuming it is less than 10), to repack
>    into H.
>
>    H is then comprised of A, B and E, and G is C+D.
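
That trace can be checked with a quick standalone simulation of the greedy rule (everything below is illustrative only; subtracting 1 from each merged pack is a crude stand-in for delta compression, matching the assumption that F comes out below 10):

#include <stdio.h>
#include <string.h>

struct pack { char name[16]; int mtime, size; };

int main(void)
{
	struct pack packs[8] = {
		{"A", 1, 4}, {"B", 2, 6}, {"C", 3, 5},
		{"D", 4, 5}, {"E", 5, 4},
	};
	int nr = 5, now = 10, batch = 10, run;

	for (run = 1; run <= 3; run++, now++) {
		struct pack merged = { "", now, 0 };
		int i, j = 0;

		/* sort by mtime, oldest first (insertion sort; n is tiny) */
		for (i = 1; i < nr; i++) {
			struct pack key = packs[i];
			int k = i - 1;

			while (k >= 0 && packs[k].mtime > key.mtime) {
				packs[k + 1] = packs[k];
				k--;
			}
			packs[k + 1] = key;
		}

		/* greedily absorb small packs until the batch is full */
		for (i = 0; i < nr; i++) {
			if (packs[i].size < batch && merged.size < batch) {
				strcat(merged.name, packs[i].name);
				merged.size += packs[i].size;
			} else {
				packs[j++] = packs[i];
			}
		}
		if (merged.size > 0) {
			merged.size--;	/* crude delta-compression saving */
			packs[j++] = merged;
		}
		nr = j;

		printf("after run %d:", run);
		for (i = 0; i < nr; i++)
			printf(" %s=%d", packs[i].name, packs[i].size);
		printf("\n");
	}
	return 0;
}

This prints "AB" after the first run, "CD" after the second, and "EAB" after the third, i.e. exactly the [ABE, CD] grouping described above.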

> In a way these repacks, always picking up the oldest,
> sound like you "roll forward" objects into new packs.
> As the new packs are newest (we have no packs from
> the future), we'd cycle through different packs to look at
> for packing on each repacking.
>
> It is however more likely that content is more similar
> on a temporal basis. (e.g. I am boldly claiming that
> [ABC, DE] would take less space than [ABE, CD]
> as produced above.)
>
> (The obvious solution to this hypothetical would be
> to backdate the resulting pack to the youngest pack
> that is input to the new pack, but I dislike fudging with
> the time a file is created/touched, so let's not go there.)

This raises a good point about what happens when we "roll over" into the "repacked" packs.

I'm not claiming that this is an optimal way to save space, but it is a way to incrementally collect small packs into slightly larger packs, all without interrupting concurrent Git commands. Reducing pack count improves data locality, which is my goal here. In our environment, we do see reduced space as a benefit, even if it is not optimal.


> Would the object count make sense as input instead of
> the pack date?


>> While first designing a 'git multi-pack-index repack' operation, I
>> started by collecting the batches based on the size of the objects
>> instead of the size of the pack-files. This allows repacking a
>> large pack-file that has very few referencd objects. However, this
> referenced
>> came at a significant cost of parsing pack-files instead of simply
>> reading the multi-pack-index and getting the file information for
>> the pack-files. This object-size idea could be a direction for
>> future expansion in this area.
> Ah, that also explains why the above idea is toast.
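
For illustration, the object-size alternative might have looked something like the sketch below. Every midx_* helper here is hypothetical (none exist under these names); the point is the cost structure: which pack an object lives in is right there in the midx, but its on-disk size is not, so summing the "live" bytes per pack means going back to the pack data itself.

#include <string.h>
#include <stdint.h>
#include <sys/types.h>

struct midx;					   /* opaque here */
uint32_t midx_object_count(struct midx *m);	   /* cheap: midx header */
int nth_object_pack(struct midx *m, uint32_t i);   /* cheap: midx lookup */
off_t nth_object_size(struct midx *m, uint32_t i); /* costly: parses the pack */

static void referenced_bytes_per_pack(struct midx *m,
				      off_t *total, int nr_packs)
{
	uint32_t i;

	memset(total, 0, nr_packs * sizeof(off_t));
	for (i = 0; i < midx_object_count(m); i++)
		total[nth_object_pack(m, i)] += nth_object_size(m, i);
}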

> Would it make sense to extend or annotate the midx file
> to give hints at which packs are easy to combine?
>
> I guess such an "annotation worker" could run in a separate
> thread / pool with the lowest priority, as this seems like a
> decent fallback for the lack of any better information on how
> to pick the packfiles.

One idea I had earlier (and it is in Documentation/technical/multi-pack-index.txt) is to have the midx track metadata about pack-files. We could avoid this "rollover" problem by using that metadata to track which packs were repacked. This could create a "pack generation" value, and we could collect a batch only from packs that have the same generation. This does seem a bit overcomplicated for the potential benefit, though, and could crowd out better uses of the metadata concept. For instance, we could use the metadata to track the information given by ".keep" and ".promisor" files.
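
As a purely hypothetical sketch of that idea (the midx format has no such per-pack metadata chunk today):

#include <stdint.h>

struct midx_pack_metadata {
	uint32_t generation;	/* 0 for packs that were never repacked */
	unsigned keep : 1;	/* would mirror a .keep file */
	unsigned promisor : 1;	/* would mirror a .promisor file */
};

/* Batch rule: only combine unkept packs of equal generation... */
static int can_batch(const struct midx_pack_metadata *a,
		     const struct midx_pack_metadata *b)
{
	return a->generation == b->generation && !a->keep && !b->keep;
}
/* ...and stamp the resulting pack with generation + 1. */

That way a freshly coalesced pack would not be rolled forward again until its peers reach the same generation, avoiding the [ABE, CD] grouping from the scenario above.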

Thanks,

-Stolee



