No real comments on the code - I'm not familiar enough with it, but it
seems really simple anyway.

Jonathan Tan <jonathantanmy@xxxxxxxxxx> writes:
> When this batch prefetch fails, these commands fall back to the
> sequential fetches. But at $DAYJOB we have noticed that this results in
> a bad user experience: a command would take unexpectedly long to finish
> if the batch prefetch would fail for some intermittent reason, but all
> subsequent fetches would work. It would be a better user experience for
> such a command would just fail.

I'm not certain that this fail-fast approach is always a better user
experience:

- I could imagine that for a small-enough set of objects (say, a very
  restrictive set of sparse specifications), one-by-one fetching would
  be good enough.

- Even if one-by-one fetching isn't fast, I'd imagine that each
  individual fetch is more likely to succeed than a batch prefetch, and
  as a user, I would prefer to ^C an operation that takes longer than I
  expected than to have to retry it repeatedly.

Here are some other arguments that you didn't mention, but that I find
more convincing:

- When the prefetch runs in a non-interactive process (e.g. a job in
  CI), the user would prefer to fail fast rather than have the job run
  longer than expected; e.g. they could script retries manually
  (although maybe we should do that ourselves) - see the sketch at the
  end of this message.

- Fetching might be subject to a quota, which would be exhausted by
  one-by-one fetching.

As such, it _might_ make sense to make this behavior configurable, since
we may sometimes want it and sometimes not.
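
To make the CI point concrete, here is the kind of retry wrapper a user
can already script around a failing command today. This is only a rough
sketch: the remote name, retry count, and backoff numbers are arbitrary
examples, not anything the patch prescribes; the git invocation itself
is just a plain "git fetch".

	#!/bin/sh
	# Illustrative only: retry a fetch a few times with backoff,
	# rather than letting a failed batch prefetch degrade into slow
	# one-by-one fetches inside the command itself.
	max_tries=3
	delay=5

	n=1
	until git fetch origin
	do
		if test $n -ge $max_tries
		then
			echo >&2 "fetch failed after $max_tries attempts"
			exit 1
		fi
		sleep $delay
		delay=$((delay * 2))
		n=$((n + 1))
	done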
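
And to illustrate what "configurable" could mean in practice - purely
hypothetical, the key name below does not exist and is made up only for
this example - it could be as small as a boolean that interactive users
leave unset and CI setups turn on:

	# Hypothetical config key, named here only for illustration:
	# when true, a failed batch prefetch aborts the command instead
	# of falling back to one-by-one fetches.
	git config fetch.failFastOnPrefetchError true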