Glen Choo <chooglen@xxxxxxxxxx> writes:

> I'm not certain that this fail-fast approach is always a better user
> experience:
>
> - I could imagine that for a small-enough set of objects (say, a very
>   restrictive set of sparse specifications), one-by-one fetching would
>   be good enough.

I think that in this case, if you couldn't fetch a small set, you
wouldn't be able to fetch a single object either.

> - Even if one-by-one fetching isn't fast, I'd imagine that each
>   individual fetch is more likely to succeed than a batch prefetch, and
>   as a user, I would prefer to ^C an operation that takes longer than I
>   expected than to have to retry it repeatedly.

Hmm...but when you ^C, you have to retry it too, right?

> Here are some other arguments that you didn't mention, but I find more
> convincing:
>
> - Running prefetch in a non-interactive process (e.g. running a job in
>   CI) and the user would prefer to fail fast than to have the job run
>   longer than expected, e.g. they could script retries manually
>   (although, maybe we should do that ourselves).

That's true.

> - Fetching might be subject to a quota, which will be exhausted by
>   one-by-one fetching.

I'll add a note that lengthy execution can be bad for quota reasons as
well (in addition to others).

> As such, it _might_ make sense to make this behavior configurable, since
> we may sometimes want it and sometimes not.

I don't think there is a compelling reason to fall back to single-object
fetching (which is there not because it is useful, but just so that Git
commands that haven't been updated to prefetch will still function), so
I'd rather not add an option for this.
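For what it's worth, the "script retries manually" idea mentioned above
could look something like the POSIX-sh helper below; the `retry` name,
the attempt/delay defaults, and the `git fetch origin` invocation are
all illustrative assumptions, not anything proposed in this thread:

```shell
# retry CMD...: run CMD until it succeeds, up to $max_retries attempts,
# sleeping $delay seconds between attempts. Returns non-zero if every
# attempt fails, so a fail-fast CI job can still fail after bounded retries.
retry() {
	n=0
	until "$@"; do
		n=$((n + 1))
		[ "$n" -ge "${max_retries:-3}" ] && return 1
		sleep "${delay:-5}"
	done
}

# Hypothetical use in a CI step, wrapping a fail-fast fetch:
#   retry git fetch origin || exit 1
```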