Re: [PATCH 2/5] backfill: basic functionality and tests

On 12/16/24 3:01 AM, Patrick Steinhardt wrote:
> On Fri, Dec 06, 2024 at 08:07:15PM +0000, Derrick Stolee via GitGitGadget wrote:

>> +The `git backfill` command provides a way for the user to request that
>> +Git downloads the missing blobs (with optional filters) such that the
>> +missing blobs representing historical versions of files can be downloaded
>> +in batches. The `backfill` command attempts to optimize the request by
>> +grouping blobs that appear at the same path, hopefully leading to good
>> +delta compression in the packfile sent by the server.
>
> Hm. So we're asking the user to fix a usability issue of git-blame(1),
> don't we? Ideally, git-blame(1) itself should know to transparently
> batch the blobs it requires to compute its output, shouldn't it? That
> usecase alone doesn't yet convince me that git-backfill(1) is a good
> idea as I'd think we should rather fix the underlying issue.

I've looked into making this change for 'git blame', and it is
nontrivial. I'm not sure of the timeline on which we could expect
'git blame' to be improved.

But you're right that this is not enough justification on its own.
'git blame' is just one example of a class of commands with this
problem; 'git log [-p|-L]' is another.

> So are there other usecases for git-backfill(1)? I can imagine that it
> might be helpful in the context of scripts that know they'll operate on
> a bunch of blobs.

One major motivation is that it can essentially provide a way to do
resumable clones: get the commits and trees in one go with a blobless
clone, then download the blobs in batches. If something interrupts the
'git backfill' command, then restarting it will only repeat the most
recent batch.
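
To sketch that flow (the repository URL is a placeholder; 'git backfill'
is run with its defaults as introduced in this patch):

  $ git clone --filter=blob:none https://example.com/repo.git
  $ cd repo
  $ git backfill    # download missing blobs in batches; safe to re-run
                    # after an interruption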

Finally, a later patch adds the --sparse option, which lets a user
operate as if they have a full clone while only downloading the blob
data within their sparse-checkout, reducing the disk space and network
time required to get to that state.
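
A sketch of that later mode (--sparse comes from a later patch in this
series; the directory name is a placeholder):

  $ git clone --filter=blob:none --sparse https://example.com/repo.git
  $ cd repo
  $ git sparse-checkout set src/
  $ git backfill --sparse   # fetch only blobs at paths within the
                            # sparse-checkout definition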

>> +	if (ctx->current_batch.nr >= ctx->batch_size)
>> +		download_batch(ctx);
>
> Okay, so the batch size is just "best effort". If we walk a tree that
> makes us exceed the batch size then we wouldn't issue a fetch during the
> tree walk. Is there any specific reason for this behaviour?
>
> In any case, as long as this is properly documented I think this should
> be fine in general.

The main reason is that we expect the server to return a packfile with
many delta relationships among the objects at a given path. If we split
a batch in the middle of a path's group of objects, then we artificially
introduce breaks in the delta chains, which could be wasteful.

However, this batching pattern could be problematic if a single file has
a million versions and the resulting batch is too large to download
efficiently. An "absolute max batch size" to guard against that is
currently left as a future extension.

>> +	clear_backfill_context(ctx);
>
> Are we leaking `revs` and `info`?

At the moment. Will fix.

>> +	return ret;
>> +}
>> +
>> int cmd_backfill(int argc, const char **argv, const char *prefix, struct repository *repo)
>>   {
>> +	struct backfill_context ctx = {
>> +		.repo = repo,
>> +		.current_batch = OID_ARRAY_INIT,
>> +		.batch_size = 50000,
>> +	};
>>   	struct option options[] = {
>>   		OPT_END(),
>>   	};
>> @@ -23,7 +123,5 @@ int cmd_backfill(int argc, const char **argv, const char *prefix, struct reposit
>>
>>   	repo_config(repo, git_default_config, NULL);
>>
>> -	die(_("not implemented"));
>> -
>> -	return 0;
>> +	return do_backfill(&ctx);
>>   }
>
> The current iteration only backfills blobs as far as I can see. Do we
> maybe want to keep the door open for future changes in git-backfill(1)
> by implementing this via a "blob" subcommand?

I think one tricky part is asking "what could be missing?". With Git's
partial clone, it seems that we could have treeless or depth-based tree
restrictions. Technically, there could also be clones restricted to a
set of sparse-checkout patterns, but I'm not aware of any server that
will respect those kinds of clones. At the moment, the tree walk would
fault in any missing trees as they are seen, but this is extremely
inefficient.

I think that the path-walk API could be adjusted to be more careful to
check for the existence of an object before automatically loading it.
That would allow for batched downloads of missing trees, though a
second scan would be required to get the next layer of objects.

I'm not sure a subcommand is the right way to prepare for this
potential future; instead, we could later adjust the logic to better
handle these treeless or tree-restricted clones.

Thanks,
-Stolee



