On Tue, Sep 20, 2022 at 03:28:37PM -0400, Jeff King wrote:
> On Mon, Sep 19, 2022 at 10:08:35PM -0400, Taylor Blau wrote:
>
> > This patch replaces the pre-`--stdin-packs` invocation (where each
> > object is given to `pack-objects` one by one) with the more modern
> > `--stdin-packs` option.
> >
> > This allows us to avoid some CPU cycles serializing and deserializing
> > every object ID in all of the packs we're aggregating. It also avoids us
> > having to send a potentially large amount of data down to
> > `pack-objects`.
>
> Makes sense. Just playing devil's advocate for a moment: is there any
> way that getting the list of packs could be worse? I'm thinking
> particularly of a race condition where a pack goes away while we're
> running, but if we had the actual object list, we could fall back to
> finding it elsewhere.
>
> I think that could only happen if we had two gc's running
> simultaneously, which is something we try to avoid already. And the
> worst case would be that one would say "oops, this pack went away" and
> bail, and not any kind of corruption.
>
> So I think it's fine, but just trying to talk through any unexpected
> implications.

Your assumption is right. We perform those pack validity checks pretty
early these days, see: 5045759de8 (builtin/pack-objects.c: ensure
included `--stdin-packs` exist, 2022-05-24).

We *could* handle the case where we know the object names, but the pack
file has gone away (either we could open the .idx but not the .pack, or
we opened both but then had to close the .pack because of hitting the
max open file-descriptor limit).

...But it gets tricky in practice, and 5045759de8 has some of those
details. At worst we'd complain that one of the packs listed is gone,
and then fail to repack (while maintaining the non-corruptedness of the
repository).

> > But more importantly, it generates slightly higher quality (read: more
> > tightly compressed) packs, because of the reachability traversal that
> > `--stdin-packs` does after the fact in order to gather namehash values
> > which seed the delta selection process.
>
> I think we _could_ do that same traversal even in objects mode. Or do
> --stdin-packs without it. If we were starting from scratch, it might be
> nice for the two features to be orthogonal so we could evaluate the
> changes independently. But I don't think it's worth going back and
> trying to split them out now. Although...

It's relatively easy to do `--stdin-packs` without the traversal. I
wouldn't be opposed to doing that here.

> > In practice, this seems to add a slight amount of overhead (on the order
> > of a few seconds for git.git broken up into ~100 packs), in exchange for
> > a modest reduction (on the order of ~3.5%) in the resulting pack size.
>
> Hmm. I thought we'd have some code to reuse the cached name-hashes in
> the .bitmap file, if one is present. But I don't see any such code in
> the stdin-packs feature. I think for "repack --geometric" it doesn't
> matter. There the "main" pack with the bitmap would also be excluded
> from the rollup (unless we are rolling all-into-one, in which case we do
> the full from-scratch repack with a traversal).

Right.

> Is that true also of "multi-pack-index repack"? I guess it would depend
> on how you invoke it. I admit I don't think I've ever used it myself,
> since the new "repack --geometric --write-midx" approach matches my
> mental model. I'm not sure when you'd actually run the "multi-pack-index
> repack" command. But if you did it with --batch-size=0 (the default), I
> think we'd end up traversing every object in history.

We could probably benefit from it, but only if there is a MIDX bitmap
around to begin with. For instance, you could first try and lookup each
object you're missing a namehash for and then read its value out of the
hashcache extension in the MIDX bitmap (assuming the MIDX bitmap exists,
and has a hashcache).

But if you don't have a MIDX bitmap, or it has a poor selection of
commits, then you're out of luck.

> > @@ -2026,17 +2027,17 @@ int midx_repack(struct repository *r, const char *object_dir, size_t batch_size,
> >
> >  	cmd_in = xfdopen(cmd.in, "w");
> >
> > -	for (i = 0; i < m->num_objects; i++) {
> > -		struct object_id oid;
> > -		uint32_t pack_int_id = nth_midxed_pack_int_id(m, i);
> > +	for (i = 0; i < m->num_packs; i++) {
> > +		strbuf_reset(&scratch);
>
> The old code went in object order within the midx. Is this sorted by
> sha1, or the pack pseudo-order? If the former, then that will yield a
> different order of objects inside pack-objects (since it is seeing the
> packs in order of our m->pack_names array). I don't _think_ it matters,
> but I just wanted to double check.

Good point. This ends up ordering the packs based on their SHA-1
checksum, and probably should stick to the pack mtimes instead.

Unfortunately, we discard that information by the time we get to this
point in midx_repack(). We don't even have it written durably in the
MIDX, either, so we reconstruct it on-the-fly in
fill_included_packs_batch() (see the `QSORT()` call there with
`compare_by_mtime()`).

I agree that it probably doesn't matter in practice, but it's worth
trying to match the existing behavior, at least.
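Sorting by mtime before any pack names get written down to
`pack-objects` would be easy enough to bolt on, though. Here is a
rough, untested sketch of the idea (not something this patch does; it
leans on prepare_midx_pack() to populate m->packs[i]->mtime, the same
way fill_included_packs_batch() does before its QSORT() with
compare_by_mtime()):

struct pack_mtime {
	uint32_t pack_int_id;
	timestamp_t mtime;
};

static int pack_mtime_cmp(const void *a_, const void *b_)
{
	const struct pack_mtime *a = a_;
	const struct pack_mtime *b = b_;

	/* oldest packs first, mirroring compare_by_mtime() */
	if (a->mtime < b->mtime)
		return -1;
	if (a->mtime > b->mtime)
		return 1;
	return 0;
}

and then in midx_repack(), instead of walking m->pack_names directly:

	struct pack_mtime *order;
	uint32_t nr = 0;

	ALLOC_ARRAY(order, m->num_packs);
	for (i = 0; i < m->num_packs; i++) {
		if (prepare_midx_pack(r, m, i))
			continue; /* pack went away; pack-objects will notice */
		order[nr].pack_int_id = i;
		order[nr].mtime = m->packs[i]->mtime;
		nr++;
	}
	QSORT(order, nr, pack_mtime_cmp);

	for (i = 0; i < nr; i++)
		/* same name handling as the patched loop, just in mtime order */
		fprintf(cmd_in, "%s\n", m->pack_names[order[i].pack_int_id]);

	free(order);

(The fprintf() there is hand-waving; the real loop would keep whatever
strbuf massaging it already does to turn m->pack_names[] entries into
something pack-objects accepts. Only the emission order changes.)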
Thanks,
Taylor