On Wed, 28 Feb 2007, Alexander Litvinov wrote:
>
> > > - replace any broken and/or missing objects
> >
> > This is the challenging part. Sometimes (hopefully often!) you can find
> > the missing objects in other copies of the repositories. At other
> > times, you may need to try to find the data some other way (for
> > example, maybe your checked-out copy contains the file content that
> > when hashed will be the missing object?).
>
> Thanks for answer. I have found this blob in cloned repo. I just copy it
> into objects subdir and repack repo again. fsck works without any errors.

Good to hear.

It would probably be good to

 - try to figure out why things got corrupted in the first place. In
   particular, we should probably check our (my) assumption that the file
   rename on cygwin is atomic. Your comment that you use ^C a lot makes me
   worry that something basically caused an incomplete write or other
   thing to happen..

   To be more precise, git will actually start off trying to do a
   "link + unlink" pair, because that is the safest thing to do on a UNIX
   filesystem: if the linked target already exists, we won't overwrite it
   (and the git data consistency rules have always been: honor existing
   data over new one, and *never* change anything that has already been
   written).

   But I would not be shocked to hear that the "link + unlink" sequence
   ends up being emulated under cygwin as a "copy + delete" due to lack of
   hardlinks or something. Also, even if the link fails, and git then
   falls back to "rename()" (since some filesystems don't do hardlinks at
   all, or limit them to one particular directory), I would _still_ not be
   totally surprised if the rename got emulated as a copy/delete for some
   strange Windows reason.

   There are other possibilities for corruption, of course: just plain
   disk corruption, or (again) some other subtle cygwin emulation or
   Windows issue could bite us.

 - Even under UNIX, I'm not entirely sure about http/ftp/rsync transfers.
   rsync in particular doesn't check anything at all, but last I looked,
   the http fetcher was also doing things like checking the integrity of
   the object *after* it had already moved it to its final resting place
   (which is again unsafe with ^C).

   In general, I strongly suggest that people use the "native git"
   pack-transfers. The "dumb protocol" transfers are called "dumb" for a
   reason..

 - It would probably be good to write up the "How to recover" thing,
   regardless of why any corruption happens. It doesn't matter if you're
   under UNIX and using native protocols, and just being careful as hell:
   disks get corrupted, sh*t happens, alpha-particles in the wrong place
   do bad things to memory cells. And bugs _are_ inevitable, even if we've
   been pretty damn good about these things.

   So it's important for people to know what the limits on corruption are,
   and tell people that regardless of how stable the git data structures
   are, if you care about your data, you need to have things in multiple
   places (and no, RAID is _not_ the answer either, even if it can be a
   small _part_ of doing things well).

   Anybody?

		Linus
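
[A minimal sketch of the "link + unlink, fall back to rename()" pattern
described in the first bullet above. This is not git's actual object-writing
code; the function name is made up, and it only illustrates why an existing
object is never overwritten and why a copy+delete emulation of link() or
rename() plus a ^C could leave a truncated object behind:]

	/*
	 * Illustration only: move a fully-written temp file into its final
	 * object name without ever clobbering existing data.
	 */
	#include <errno.h>
	#include <stdio.h>
	#include <unistd.h>

	static int finalize_object(const char *tmpfile, const char *filename)
	{
		if (!link(tmpfile, filename)) {
			/* Linked into place: drop the temporary name. */
			unlink(tmpfile);
			return 0;
		}
		if (errno == EEXIST) {
			/*
			 * Target already exists: honor the existing data,
			 * throw away the new copy.
			 */
			unlink(tmpfile);
			return 0;
		}
		/*
		 * link() not supported on this filesystem: fall back to
		 * rename().  Atomic on a POSIX filesystem; the question in
		 * the mail is whether cygwin/Windows emulate it as a
		 * non-atomic copy + delete.
		 */
		if (!rename(tmpfile, filename))
			return 0;
		fprintf(stderr, "unable to write object %s\n", filename);
		return -1;
	}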