Hi Greg,

ghazel@xxxxxxxxx wrote:

> I've encountered a strange issue where "git reset --hard" insists on
> "Checking out files ..." when all that is changed is the ctime

There is a performance trade-off: refreshing the index requires reading
and hashing an existing file whenever its stat information has changed,
which can be faster or slower than blindly overwriting it, depending on
the situation.

That said, I have no strong objection to an implicit refresh in
"git reset" (performance-sensitive scripts should be using read-tree
directly anyway).  Have you tried making that change to
builtin/reset.c?  How does it perform in practice?

> My deploy process (capistrano) maintains a cached copy of a git repo,
> which it fetches, resets, and then hardlinks files from when a deploy
> occurs ( https://github.com/37signals/fast_remote_cache ).  The
> hardlinking step is meant to save the time of copying the file, but
> hardlinking changes the ctime of the source files.

Interesting.  Setting "[core] trustctime = false" in the repository
configuration could be a good solution (no performance downside I can
think of).

Hope that helps,
Jonathan
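
P.S. For concreteness, the read-tree route mentioned above could look
roughly like this, run inside the cached clone that capistrano
maintains (an untested sketch, not the exact change to builtin/reset.c):

    # Refresh cached stat information first, so that ctime-only
    # changes are not mistaken for content modifications.
    git update-index -q --refresh

    # Plumbing near-equivalent of "git reset --hard HEAD": read
    # HEAD's tree into the index and update the working tree to match.
    git read-tree -u --reset HEAD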
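
And the configuration tweak is a one-time command in that same clone,
equivalent to adding the setting to the [core] section of .git/config
by hand:

    # Stop treating a ctime-only difference as a stat change.
    git config core.trustctime false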