On Thu, 29 May 2008, Linus Torvalds wrote:
>
> So if you have a system crash at a really bad time, you may have a git
> repository that needs manual intervention to actually be *usable*. I hope
> nobody ever believed anything else. That manual intervention may be things
> like:
> ...
>  - actually throw away broken commits, and re-create them (ie basically
>    doing a "git reset <known-good-state>" plus re-committing the working
>    tree or perhaps re-doing a whole "git am" series or something)

The important part here is that it's only the *new* state that can be this
kind of "broken commits". In other words, you'd never have to re-do actual
*old* commits, just the commits you were doing as things crashed - the
commits that you were in the middle of doing, and still have the data for.

Example from my case: I may have a series of 250+ commits that I create
with "git am" when I sync up with Andrew, and I very much want the speed
of being able to create all that new commit data without ever even causing
a _single_ synchronous disk write.

So if the machine were to crash in the middle of the series, I might lose
all of that data, but I still have my mailbox, so I'd just need to reset
to the point before I even started the "git am", and re-do the whole
series. My actual *base* repository objects would never get corrupted.

[ And one final notice: I don't know about others, but I've actually had
  more corruption from disks going bad etc than from system crashes per
  se. And when *that* happens, old data is obviously as easily gone as new
  data is. So absolutely _nothing_ replaces backups. It doesn't matter if
  you do a "fsync()" after every single byte write - a disk crash can and
  will corrupt things that were "stable". So even "stable storage" is very
  much unstable in the end. ]

		Linus
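The recovery described above - throw away the broken new commits and re-do the series from the saved mailbox - can be sketched in a throwaway repository. This is a minimal illustration, not anything from the original mail: the "known-good" tag, the file name, and the commit messages are all made up for the example, and the real re-apply step would be "git am <mailbox>" rather than the placeholder comment at the end.

```shell
#!/bin/sh
# Sketch of recovering from commits lost/broken in a crash.
# All names (the "known-good" tag, "file", "patches.mbox") are illustrative.
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"

# The old, safe state that predates the "git am" run:
echo base > file
git add file
git commit -qm "base"
git tag known-good

# New commits created without synchronous writes - the part a crash can lose:
echo wip > file
git commit -qam "work that would be lost in the crash"

# After a crash, first verify the object database:
git fsck --full

# Throw away the broken new commits; old objects are untouched:
git reset --hard -q known-good

# Then re-do the series from the still-intact mailbox, e.g.:
#   git am patches.mbox
git log --oneline    # only "base" remains
```

The point of the sketch is the asymmetry Linus is describing: the reset only discards state you can regenerate (the mailbox still exists), while the *base* objects reachable from known-good are never rewritten.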