Jonathan Tan <jonathantanmy@xxxxxxxxxx> writes:

> Whenever a lazy fetch is performed for a tree object, any trees and
> blobs it directly or indirectly references will be fetched as well.
> There is a "no_dependents" argument in struct fetch_pack_args that
> indicates that objects that the wanted object references need not be
> sent, but it currently has no effect other than to inhibit usage of
> object flags.
>
> Extend the "no_dependents" argument to also exclude sending of objects
> as much as the current protocol allows: when fetching a tree, all trees
> it references will be sent (but not the blobs), and when fetching a
> blob, it will still be sent. (If this mechanism is used to fetch a
> commit or any other non-blob object, all referenced objects, except
> blobs, will be sent.) The client neither needs to know nor specify the
> type of each object it wants.
>
> The necessary code change is done in fetch_pack() instead of somewhere
> closer to where the "filter" instruction is written to the wire so that
> only one part of the code needs to be changed in order for users of all
> protocol versions to benefit from this optimization.

It is very clear how you are churning the code, but it is utterly
unclear from the description what you perceived as a problem and why
this change is a good (if not the best) solution for that problem, at
least to me.

After reading the above description, I cannot shake the feeling that
this is tied too strongly to the tree:0 use case. Does it help other
use cases (e.g. would it be useful or harmful if a lazy clone was done
to exclude blobs that are larger than a certain threshold, or objects
of all types that are not referenced by commits younger than a certain
threshold)?
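
To check my reading of the mechanism, here is a minimal sketch of what
I take the change to amount to inside fetch_pack(); the no_dependents
and filter_options fields and parse_list_objects_filter() are my
assumptions based on fetch-pack.h and list-objects-filter-options.h,
not a quote of your patch:

	#include "fetch-pack.h"
	#include "list-objects-filter-options.h"

	/*
	 * Approximate "do not send objects that the wanted objects
	 * reference" with the closest thing the current protocol
	 * offers: a blob:none filter, set only if the caller has not
	 * already asked for a filter of its own.  Trees reachable from
	 * the wanted objects are still sent; blobs are omitted unless
	 * they were directly named as "want"s.
	 */
	static void approximate_no_dependents(struct fetch_pack_args *args)
	{
		if (args->no_dependents && !args->filter_options.choice)
			parse_list_objects_filter(&args->filter_options,
						  "blob:none");
	}

If that is the gist, it would help to say in the log message that
blob:none is merely an approximation the current protocol happens to
allow, and that the extra trees being sent is an accepted cost, not a
goal.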