Jonathan Tan <jonathantanmy@xxxxxxxxxx> writes:

> number of blobs fetched as the blame is being run. My biggest concern
> is that there is no good limit - I suspect that for a file that is
> extensively changed, 10 blobs is too few and you'll need something like
> 50 blobs. But 50 blobs means 50 RTTs, which also might be too much for
> an end user.

Depending on the project, the size of a typical change to a blob may
differ, so "10 commits that touched this blob" may touch 20% of the
contents in one project, while in another project that prefers
finer-grained commits it may take 50 commits to make the same amount
of change.  I agree with you that there is no good default that fits
all projects.

Do 50 blobs have to mean 50 RTTs?  I wonder if there is a good way to
say "please give me all the tree and blob objects needed to complete
the blobs at path $F for the past 50 commits" to the lazy fetch
machinery and receive a single pack that contains all the objects
listed by "git rev-list --objects HEAD~50.. -- $F"?  (A rough
command-line sketch of that idea is at the end of this message.)

I am not sure what should happen at the commit in that range where
the path $F first appears (meaning: the path did not exist before
that commit, and its contents came from a different path in the
parent of that commit).  You'd need (a subset of) the objects in
"git rev-list --objects C^!" for that commit to find out where the
contents came from, but what subset should we use?  Fully hydrating
the trees of the commits at the rename boundary would ensure you'd
catch the same renames a non-lazy repository would, but that is far
more than what the user can afford (otherwise, you wouldn't be using
a narrow clone in the first place).  So, I dunno.
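
To be concrete about the "single pack" idea above, a rough equivalent
from the command line might look like the following.  This is an
untested sketch, not something the lazy-fetch machinery does today;
it assumes a blob:none partial clone (so the trees are already local)
and a promisor remote that accepts arbitrary object IDs in its "want"
lines, which servers that support partial clones generally do:

	# List the objects "git blame" would want for the last 50
	# commits of $F, keep only the ones that are missing from this
	# lazy clone, and ask the promisor remote for all of them in a
	# single fetch.
	git rev-list --objects --missing=print HEAD~50.. -- "$F" |
	sed -n 's/^?//p' |
	git fetch-pack --stdin "$(git remote get-url origin)"

That trades the 50 round-trips for a single one, at the cost of
possibly over-fetching blobs the blame may never need to look at,
which is pretty much the trade-off being discussed.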