Derrick Stolee <dstolee@xxxxxxxxxxxxx> writes:

> +	if ((*list)->generation != GENERATION_NUMBER_INFINITY) {
> +		if ((*list)->generation > GENERATION_NUMBER_MAX)
> +			die("generation number %u is too large to store in commit-graph",
> +			    (*list)->generation);
> +		packedDate[0] |= htonl((*list)->generation << 2);
> +	}

How serious do we want this feature to be?

On one extreme, we could be irresponsible and say it will be a problem for our descendants in the future if their repositories have more than a billion pearls on a single strand, and the above certainly is a reasonable way to punt.  Those who actually encounter the problem will notice by Git dying somewhere rather deep in the callchain.

Or we could say Git actually does support a history that is arbitrarily long, even though such a deep portion of history will not benefit from having generation numbers in the commit-graph.  I've been assuming that our stance is the latter, and that is why I made noises about overflowing the 30-bit generation field in my review of the previous step.

In case we want to take the "we know this is very large, but we do not know the exact value" stance, we may actually want a mode where we can pretend that GENERATION_NUMBER_MAX is set quite low (say, 256) and make sure that the code to handle overflow behaves sensibly (a rough sketch of that idea appears at the end of this message).

> +	for (i = 0; i < nr_commits; i++) {
> +		if (commits[i]->generation != GENERATION_NUMBER_INFINITY &&
> +		    commits[i]->generation != GENERATION_NUMBER_ZERO)
> +			continue;
> +
> +		commit_list_insert(commits[i], &list);
> +		while (list) {
> +...
> +		}
> +	}

So we go over the list of commits just _once_, and for each of them make sure its generation is assigned correctly: conceptually recursively, but iteratively in the implementation by using a commit list, we first ensure that all of its parents have their generation assigned, and only then compute the generation for the commit itself, before moving on to the next one.  Which sounds correct.
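
To make the shape of that walk concrete, here is a standalone toy model in plain C.  It deliberately avoids Git's internals; the node struct, GEN_UNSET, and the fixed-size stack are all made up for illustration.  An explicit stack plays the role of the commit list: a node is finalized only once every one of its parents already carries a computed generation.

#include <stdio.h>

#define GEN_UNSET 0
#define MAX_PARENTS 2

struct node {
	unsigned generation;
	struct node *parents[MAX_PARENTS];
	int nr_parents;
};

static void compute_generation(struct node *start)
{
	struct node *stack[64];	/* plenty for this toy graph */
	int top = 0;

	stack[top++] = start;
	while (top) {
		struct node *n = stack[top - 1];
		unsigned max_parent_gen = 0;
		int parents_done = 1;
		int i;

		for (i = 0; i < n->nr_parents; i++) {
			struct node *p = n->parents[i];
			if (p->generation == GEN_UNSET) {
				stack[top++] = p;	/* parent first */
				parents_done = 0;
			} else if (p->generation > max_parent_gen) {
				max_parent_gen = p->generation;
			}
		}
		if (parents_done) {
			n->generation = max_parent_gen + 1;
			top--;	/* pop; this node is final */
		}
	}
}

int main(void)
{
	struct node n[4] = { { 0 } };
	int i;

	/* n[0] is a root; n[1] and n[2] branch from it; n[3] merges them */
	n[1].parents[0] = &n[0]; n[1].nr_parents = 1;
	n[2].parents[0] = &n[0]; n[2].nr_parents = 1;
	n[3].parents[0] = &n[1];
	n[3].parents[1] = &n[2]; n[3].nr_parents = 2;

	for (i = 0; i < 4; i++)
		if (n[i].generation == GEN_UNSET)
			compute_generation(&n[i]);

	for (i = 0; i < 4; i++)
		printf("node %d: generation %u\n", i, n[i].generation);
	return 0;
}

This prints generations 1, 2, 2, 3, matching the "one more than the maximum over all parents" definition, and, like the quoted loop, the outer scan skips nodes that already have a generation assigned.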
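And going back to the overflow question: the "clamp instead of die(), with a cap we can shrink for testing" idea might look roughly like the following.  GIT_TEST_COMMIT_GRAPH_GENERATION_MAX and effective_generation_max() are names I just made up for illustration; they are not anything that exists in Git.

#include <stdint.h>
#include <stdlib.h>

#define GENERATION_NUMBER_MAX 0x3FFFFFFF	/* 30-bit field in the format */

static uint32_t effective_generation_max(void)
{
	/* hypothetical test knob; real Git would plumb this differently */
	const char *s = getenv("GIT_TEST_COMMIT_GRAPH_GENERATION_MAX");
	if (s && *s)
		return (uint32_t)strtoul(s, NULL, 10);
	return GENERATION_NUMBER_MAX;
}

static uint32_t store_generation(uint32_t gen)
{
	uint32_t max = effective_generation_max();

	/*
	 * "We know this is very large, but we do not know the exact
	 * value": saturate at the cap instead of dying, so arbitrarily
	 * deep histories still get a usable, if imprecise, value.
	 */
	return gen > max ? max : gen;
}

Saturating like this keeps the stored value meaningful as a lower bound ("the generation is at least this much"), which is the conservative direction for cut-off style reachability checks, and setting the knob to something tiny like 256 would let the overflow path be exercised without manufacturing a billion-deep history.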