Duy Nguyen <pclouds@xxxxxxxxx> writes:

> -	for (i = 0; i < state->istate->cache_nr; i++) {
> +	for (i = 0; i < trust_ino && state->istate->cache_nr; i++) {

There is some typo here, but modulo that this looks like the right
thing to do.

> @@ -419,10 +419,24 @@ static void mark_colliding_entries(const struct checkout *state,
> 		if (dup->ce_flags & (CE_MATCHED | CE_VALID | CE_SKIP_WORKTREE))
> 			continue;
>
> -		if ((trust_ino && dup->ce_stat_data.sd_ino == st->st_ino) ||
> -		    (!trust_ino && !fspathcmp(ce->name, dup->name))) {
> +		if (dup->ce_stat_data.sd_ino == (unsigned int)st->st_ino) {

This is slightly unfortunate, but is the best we can do for now.

The reason why the design of the "cached stat info" mechanism allows
the sd_* fields to be narrower than the underlying fields is that
they are used only as an early-culling measure: if the value saved
with truncation is different from the current value with truncation,
then the full values cannot possibly be the same, so we know that
the file changed without looking at the contents.

This use, however, is different.  Equality of truncated values
immediately declares CE_MATCHED here, producing a false positive,
which is not what we want, no?

> 			dup->ce_flags |= CE_MATCHED;
> +			return;
> +		}
> +	}