On Fri, Dec 3, 2010 at 4:51 PM, Jonathan Nieder <jrnieder@xxxxxxxxx> wrote:
> ghazel@xxxxxxxxx wrote:
>
>> I've encountered a strange issue where "git reset --hard" insists on
>> "Checking out files ..." when all that has changed is the ctime
>
> There is a performance trade-off.  Refreshing the index requires
> reading+hashing the existing file if the stat information changed;
> this could be faster or slower than blindly overwriting depending on
> the situation.
>
> That said, I have no strong objection to an implicit refresh in "git
> reset" (performance-sensitive scripts should be using read-tree
> directly anyway).  Have you tried making that change to
> builtin/reset.c?  How does it perform in practice?

I did not make any modifications to reset.c; I just ran the refresh before
the reset.

So originally:

$ time git reset --hard <rev>
Checking out files: 100% (2772/2772), done.

real    0m5.328s
user    0m2.539s
sys     0m2.542s

as opposed to:

$ time git update-index --refresh

real    0m1.236s
user    0m1.024s
sys     0m0.201s

$ time git reset --hard <rev>

real    0m0.055s
user    0m0.011s
sys     0m0.041s

>>               My deploy process (capistrano) maintains a cached copy of
>> a git repo, which it fetches, resets, and then hardlinks files from
>> when a deploy occurs ( https://github.com/37signals/fast_remote_cache
>> ). The hardlinking step is meant to save the time of copying the files,
>> but hardlinking changes the ctime of the source files.
>
> Interesting.  Setting "[core] trustctime = false" in the repository
> configuration could be a good solution (no performance downside I can
> think of).

This is a very useful suggestion. I do not see a case where ctime would be
valuable to me. Is it really valuable to other people? What is the trade-off?

-Greg
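For anyone landing on this thread with the same symptom, a minimal sketch of
the two workarounds discussed above; <rev> is a placeholder as in the timings,
and the hardlink-based deploy layout is an assumption from the setup described:

# Stop comparing ctime when deciding whether a file is stat-clean, so a
# hardlink-based deploy (which bumps ctime but not content) no longer
# forces a full re-checkout on "git reset --hard":
$ git config core.trustctime false

# Or refresh the stat cache by hand before resetting, so files whose
# content is unchanged are not rewritten:
$ git update-index --refresh
$ git reset --hard <rev>

Note that "git update-index --refresh" exits non-zero when it finds paths that
need updating, so scripted deploys may want to tolerate its exit status.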