On Thu, Dec 08 2022, Junio C Hamano wrote:

> Protecting files from bit flipping filesystem corruption is a
> different matter. Folks at hosting sites like GitHub would know how
> often they detect object corruption (I presume they do not have to
> deal with the index file on the server end that often, but loose and
> pack object files have the trailing checksums the same way) thanks
> to the trailing checksum, and what the consequences are if we lost
> that safety (I am guessing it would be minimum, though).

I don't think this checksum does much for us in practice, but just on
this point in general: extrapolating results at <hosting site> when it
comes to making general decisions about git's data safety isn't a good
idea.

I don't know about GitHub's hardware, but servers almost universally
use ECC RAM, and tend to use things like error-correcting filesystems,
RAID, etc.

Data from that sort of environment is really interesting when it comes
to running git on servers, but it really shouldn't be extrapolated to
git's userbase in general. A lot of those users will be using cheap
memory and/or storage devices without any error correction.

They're also likely to stress our reliability guarantees in other
ways, e.g. by yanking their power cord (or equivalent), which a server
typically won't need to deal with.
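
For anyone following along: the trailing checksum being discussed is
the hash git appends to files like .git/index, where the last bytes
are a hash of everything before them. A minimal sketch of checking it,
assuming a SHA-1 (not SHA-256) repository, with a hypothetical helper
name:

```python
import hashlib


def verify_trailing_checksum(path):
    """Verify the trailing SHA-1 checksum git appends to files such as
    .git/index: the final 20 bytes are the SHA-1 of all preceding
    bytes. Assumes a SHA-1 repo; SHA-256 repos use a 32-byte trailer.
    (Hypothetical helper for illustration, not git's own code.)"""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 20:
        return False
    body, stored = data[:-20], data[-20:]
    return hashlib.sha1(body).digest() == stored
```

A mismatch here only tells you the file was corrupted somewhere
between write and read; it can't correct anything, which is part of
why its practical value is being questioned above.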