> From: Derrick Stolee <dstolee@xxxxxxxxxxxxx>
>
> When working with very large repositories, an incremental 'git fetch'
> command can download a large amount of data. If there are many other
> users pushing to a common repo, then this data can rival the initial
> pack-file size of a 'git clone' of a medium-size repo.
>
> Users may want to keep the data on their local repos as close as
> possible to the data on the remote repos by fetching periodically in
> the background. This can break up a large daily fetch into several
> smaller hourly fetches.
>
> The task is called "prefetch" because it is work done in advance
> of a foreground fetch to make that 'git fetch' command much faster.
>
> However, if we simply ran 'git fetch <remote>' in the background,
> then the user running a foregroudn 'git fetch <remote>' would lose

-> foreground

I have some more minor comments that I will send as individual replies,
but overall, the patch set looks good to me.

> +static int append_remote(struct remote *remote, void *cbdata)
> +{
> +	struct string_list *remotes = (struct string_list *)cbdata;
> +
> +	string_list_append(remotes, remote->name);
> +	return 0;
> +}
> +
> +static int maintenance_task_prefetch(struct maintenance_run_opts *opts)
> +{
> +	int result = 0;
> +	struct string_list_item *item;
> +	struct string_list remotes = STRING_LIST_INIT_DUP;
> +
> +	if (for_each_remote(append_remote, &remotes)) {
> +		error(_("failed to fill remotes"));
> +		result = 1;
> +		goto cleanup;
> +	}
> +
> +	for_each_string_list_item(item, &remotes)
> +		result |= fetch_remote(item->string, opts);
> +
> +cleanup:
> +	string_list_clear(&remotes, 0);
> +	return result;
> +}

I was wondering why the generation of the list and the iteration were
split up, but I see that you want to attempt to fetch each remote even
if one of them fails.