On Tue, Jun 23, 2015 at 02:57:23PM -0700, Stefan Beller wrote:

> Linus Torvalds started a discussion[1] if we want to play rather safe
> than use defaults which make sense only for the most power users of Git:
>
> > So git is "safe" in the sense that you won't really lose any data,
> > but you may well be inconvenienced. The "fsync each object" config
> > option is there in case you don't want that inconvenience, but it
> > should be noted that it can make for a hell of a performance impact.
>
> > Of course, it might well be the case that the actual default
> > might be worth turning around. Most git users probably don't
> > care about that kind of "apply two hundred patches from Andrew
> > Morton" kind of workload, although "rebase a big patch-series"
> > does end up doing basically the same thing, and might be more
> > common.
>
> This patch enables fsync_object_files by default.

If you are looking for safety out of the box, I think this falls far
short, as we do not fsync all of the other files. For instance, we do
not fsync refs before they are written (nor anything else that uses
the commit_lock_file() interface to rename, such as the index).

We do always fsync packfiles and their indices. I had always assumed
this was fine on ext4 with data=ordered (i.e., either the rename and
its pointed-to content will go through, or not; so you either get your
update or the old state, but not a garbage or empty file). But it
sounds from what Ted wrote in:

  http://article.gmane.org/gmane.linux.file-systems/97255

that this may not be the case. If it's not, I think we should consider
fsyncing ref writes.

-Peff