Nguyễn Thái Ngọc Duy <pclouds@xxxxxxxxx> writes:

> diff --git a/builtin/pack-objects.c b/builtin/pack-objects.c
> index 417c830..c58a9cb 100644
> --- a/builtin/pack-objects.c
> +++ b/builtin/pack-objects.c
> @@ -2709,6 +2709,11 @@ int cmd_pack_objects(int argc, const char **argv, const char *prefix)
>  			if (get_oid_hex(skip_hash_hex, &skip_hash))
>  				die(_("%s is not SHA-1"), skip_hash_hex);
>  		}
> +
> +		/*
> +		 * Parallel delta search can't produce stable packs.
> +		 */
> +		delta_search_threads = 1;
>  	}
>
>  	argv_array_push(&rp, "pack-objects");

Multi-threaded packing is _one_ source of failing to regenerate the
same pack for the same set of objects, but we shouldn't tie our hands
by promising it will forever be the _only_ source of it by doing
things like this.  We may want to dynamically tweak the packing
behaviour depending on the load of the minute and such, for example.

This is an indication that the approach this series takes is leading
us in the wrong direction.

I think a more sensible approach to "resuming" is to attack cloning
first.  Periodically take a reasonable baseline snapshot to create a
bundle (depending on the activity level of the project, the interval
may span from 12 hours to 2 weeks, and you would want to make it
configurable).  Teach "clone" to check for that bundle first and
perform a resumable bulk transfer for the stable part of the history
(e.g. up to the latest tag, or a branch that does not rewind; the set
of refs to use as the stable anchors is something you would also want
to make configurable).  Then fill the gap between the baseline
snapshot and the up-to-date state with another round of "git fetch",
and you'd have solved the most serious problem already.
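
Most of the pieces for such a flow exist already.  As a rough sketch
with today's commands (the host name, the paths, and the choice of
"master" as the stable anchor are all made up for illustration), the
serving side would refresh the baseline bundle from a periodic job:

	git bundle create /var/www/project.bundle master

and the cloning side would pull the bulk of the history over a
transport that knows how to resume, then fill the gap with an
ordinary fetch:

	curl -C - -O https://example.com/project.bundle
	git clone project.bundle project
	cd project
	git remote set-url origin https://example.com/project.git
	git fetch origin

The bundle transfer can be interrupted and restarted any number of
times, and the final "git fetch" only needs to move whatever history
was created after the snapshot was taken.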
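
As an aside, the instability the patch tries to paper over is easy to
observe without any new code.  Packing the same set of objects with
different thread counts (the paths and counts here are arbitrary)
will usually produce byte-for-byte different packs on a repository of
any real size, although a tiny history may happen to pack
identically:

	git rev-list --objects --all >/tmp/objects
	git pack-objects --threads=1 /tmp/one </tmp/objects
	git pack-objects --threads=8 /tmp/eight </tmp/objects
	cmp /tmp/one-*.pack /tmp/eight-*.pack

Two runs with the same multi-threaded setting are not guaranteed to
match each other either, which is exactly why the patch forces the
thread count down to one.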