On Fri, 2010-12-03 at 18:51 -0600, Jonathan Nieder wrote:
> Hi Greg,
>
> ghazel@xxxxxxxxx wrote:
>
> > I've encountered a strange issue where "git reset --hard" insists on
> > "Checking out files ..." when all that has changed is the ctime
>
> There is a performance trade-off. Refreshing the index requires
> reading and hashing the existing file if the stat information changed;
> this could be faster or slower than blindly overwriting, depending on
> the situation.
>
> > My deploy process (capistrano) maintains a cached copy of
> > a git repo, which it fetches, resets, and then hardlinks files from
> > when a deploy occurs ( https://github.com/37signals/fast_remote_cache ).
> > The hardlinking step is meant to save the time of copying the files,
> > but hardlinking changes the ctime of the source files.
>
> Interesting. Setting "[core] trustctime = false" in the repository
> configuration could be a good solution (no performance downside I can
> think of).

It is worth noting that many file-based backup systems which do "online"
backups (such as the one in use where I work) restore the atime by default
at the expense of the ctime on unix-style filesystems (the logic being that
the atime may have had value, while the ctime changes either way -- which
may or may not be true). While many of the git command-line tools I have
run seem to figure this out OK, it drives gitk nuts. As far as I am
concerned, this is a small price to pay for a solid, daily-updated backup
of my machine(s). I haven't yet put "git reset" of any sort to use
(obviously I just haven't been breaking enough things yet), but I suspect
it would react in a similar way.

--
-Drew Northup N1XIM
AKA RvnPhnx on OPN
________________________________________________
"As opposed to vegetable or mineral error?"
-John Pescatore, SANS NewsBites Vol. 12 Num. 59
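[Not part of the original thread: a minimal sketch of the behavior discussed above. It shows that creating a hardlink bumps a file's ctime (via the link-count change) and how one would set the `core.trustctime = false` knob Jonathan suggested. The temp-repo setup and the GNU `stat -c %Z` invocation are assumptions; BSD/macOS `stat` uses different flags.]

```shell
#!/bin/sh
# Sketch: hardlinking changes ctime; core.trustctime=false tells git
# to ignore ctime when deciding whether an index entry is stale.
set -e

dir=$(mktemp -d)            # scratch repo (hypothetical location)
cd "$dir"
git init -q repo
cd repo
echo hello > file.txt
git add file.txt
git -c user.name=test -c user.email=test@example.com commit -qm init

before=$(stat -c %Z file.txt)   # ctime before (GNU stat syntax)
sleep 1                          # ctime has at least 1s resolution
ln file.txt hardlink.txt         # link count changes => inode ctime changes
after=$(stat -c %Z file.txt)

if [ "$before" != "$after" ]; then
    echo "ctime changed by hardlinking"
fi

# Jonathan's suggested setting, scoped to this repository:
git config core.trustctime false
git config core.trustctime      # prints: false
```

With `core.trustctime = false`, a subsequent `git reset --hard` should no longer consider the hardlinked-from files modified solely because their ctime moved.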