Derrick Stolee <stolee@xxxxxxxxx> writes:

>>>> +	for (i = 0; i < rename_dst_nr; i++) {
>>>> +		if (rename_dst[i].pair)
>>>> +			continue; /* already found exact match */
>>>> +		add_if_missing(options->repo, &to_fetch, rename_dst[i].two);
>>>
>>> Could this be reversed instead to avoid the "continue"?
>>
>> Hmm...I prefer the "return early" approach, but can change it if
>> others prefer to avoid the "continue" here.
>
> The "return early" approach is great and makes sense unless there is
> only one line of code happening in the other case. Not sure if there
> is any potential that the non-continue case grows in size or not.
>
> Doesn't hurt that much to have the "return early" approach, as you
> wrote it.

Even with just one statement after the continue, in this particular
case the logic seems to flow a bit more naturally: "Let's look at each
item in this list.  Ah, this one has already been processed, so let's
move on.  Otherwise, we may need to do something a bit more."

It also saves one indentation level for the logic that matters ;-)

>>>> +	for (i = 0; i < rename_src_nr; i++)
>>>> +		add_if_missing(options->repo, &to_fetch, rename_src[i].p->one);
>>>
>>> Does this not have the equivalent "rename_src[i].pair" logic for
>>> exact matches?

One source blob can be copied to multiple destination paths, with and
without modification, but we currently do not detect the case where a
destination blob is a concatenation of two source blobs.  So we can
optimize the destination side ("we are done with it, no need to
look---we won't find anything better anyway, as we've found the exact
copy source"), but we cannot do the same optimization on the source
side ("yes, this one was copied to path A, but path B may still have a
copy with a slight modification"), I would think.
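
To make that asymmetry concrete, here is a small, self-contained
sketch.  This is not the actual diffcore-rename code; the struct and
variable names below are made up purely for illustration.  It shows
why a destination with a recorded exact match can be skipped, while a
source must stay in the candidate pool even after it has matched once:

	#include <stdio.h>

	/* Hypothetical stand-ins for the real rename_dst/rename_src entries. */
	struct dst_entry {
		const char *path;
		int has_exact_match;	/* analogous to rename_dst[i].pair != NULL */
	};

	struct src_entry {
		const char *path;
	};

	int main(void)
	{
		/* One source copied to three paths; only "copy-a.c" is byte-identical. */
		struct src_entry srcs[] = { { "orig.c" } };
		struct dst_entry dsts[] = {
			{ "copy-a.c", 1 },	/* exact copy: nothing better to find */
			{ "copy-b.c", 0 },	/* modified copy: still needs the source */
			{ "copy-c.c", 0 },	/* another modified copy of the same source */
		};
		size_t i;

		/*
		 * Destination side: once an exact match is recorded for a
		 * path, no better pairing can be found for it, so skip it.
		 */
		for (i = 0; i < sizeof(dsts) / sizeof(dsts[0]); i++) {
			if (dsts[i].has_exact_match)
				continue;	/* already found exact match */
			printf("still need content for destination %s\n", dsts[i].path);
		}

		/*
		 * Source side: even though srcs[0] already matched "copy-a.c"
		 * exactly, it remains a candidate for the inexact copies, so
		 * it cannot be skipped the same way.
		 */
		for (i = 0; i < sizeof(srcs) / sizeof(srcs[0]); i++)
			printf("still need content for source %s\n", srcs[i].path);

		return 0;
	}

Skipping srcs[0] after its exact match with "copy-a.c" would lose the
inexact copies to "copy-b.c" and "copy-c.c", which is why the source
loop has no counterpart to the rename_dst[i].pair check.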