> If we go back to first principles, what do we want to do? We want the
> system administrator to know that a file might be potentially
> corrupted. And perhaps, if a program tries to read from that file, it
> should get an error. If we have a program that has that file mmap'ed
> at the time of the error, perhaps we should kill the program with some
> kind of signal. But to force a reboot of the entire system? Or to
> remount the file system read-only? That seems to be completely
> disproportionate for what might be 2 or 3 bits getting flipped in a
> page cache for a file.

I think we know that the file *is* corrupted, not just "potentially".
We probably know the location of the corruption to cache-line
granularity. Perhaps better on systems where we have access to ECC
syndrome bits, perhaps worse ... we do have some errors where the low
bits of the address are not known.

I'm in total agreement that forcing a reboot or fsck is unhelpful here.
But what should we do? We don't want to let the error propagate - that
could cause a cascade of further failures as applications make bad
decisions based on the corrupted data.

Perhaps we could ask the filesystem to move the file to a top-level
"corrupted" directory (analogous to "lost+found") with some attached
metadata to help recovery tools know where the file came from and
which byte ranges in it are corrupted? We'd also need to invalidate
existing open file descriptors (or, less damaging, flag them so they
avoid the corrupted area?).

Whatever we do, it needs to be persistent across a reboot ... the lost
bits are not going to magically heal themselves.

We already have code to send SIGBUS to applications that have the
corrupted page mmap(2)'d (see mm/memory-failure.c).

Other ideas?

-Tony
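
P.S. To make the "attached metadata" part a little more concrete,
something along the lines of the sketch below is what I have in mind.
It is purely hypothetical - the xattr name, record layout, and field
names are invented for illustration; nothing like this exists today.

/*
 * Hypothetical metadata a filesystem could attach when it moves a
 * victim file into the top-level "corrupted" directory.  The xattr
 * name and layout are invented purely for illustration.
 */
#include <stdint.h>

#define CORRUPTED_XATTR	"system.corrupted"	/* invented name */

struct corrupted_extent {
	uint64_t offset;	/* byte offset of a damaged range in the file */
	uint64_t length;	/* typically a cache line or a page */
};

struct corrupted_record {
	uint32_t version;
	uint32_t nr_extents;
	uint64_t orig_parent_ino;	/* directory the file was moved from */
	char	 orig_name[256];	/* original file name */
	struct corrupted_extent extents[];	/* the damaged byte ranges */
};

A recovery tool would read the record with getxattr(2), and read(2) of
a damaged range could keep returning -EIO until an administrator
clears the record.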
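
For reference, the mmap(2) side of this already works. Below is a
minimal, untested sketch of what an application sees today; it assumes
a libc that exposes the BUS_MCEERR_* si_code values and the
PR_MCE_KILL prctl, and error checking is omitted for brevity.

/*
 * Sketch: catch the SIGBUS that mm/memory-failure.c sends for a
 * poisoned page this process has mmap(2)'d.
 */
#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <sys/prctl.h>

static void mce_handler(int sig, siginfo_t *si, void *uc)
{
	/* write(2) is async-signal-safe, printf(3) is not */
	static const char msg[] = "SIGBUS: memory failure\n";

	(void)sig;
	(void)uc;

	if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO)
		write(STDERR_FILENO, msg, sizeof(msg) - 1);

	/*
	 * si->si_addr is the start of the poisoned region (and
	 * si->si_addr_lsb, where available, gives its size).  A real
	 * application would stop trusting that range of the mapping
	 * rather than simply exiting.
	 */
	_exit(1);
}

int main(void)
{
	struct sigaction sa;

	memset(&sa, 0, sizeof(sa));
	sa.sa_sigaction = mce_handler;
	sa.sa_flags = SA_SIGINFO;
	sigaction(SIGBUS, &sa, NULL);

#ifdef PR_MCE_KILL
	/* opt in to "early kill": get BUS_MCEERR_AO as soon as the error
	 * is found, rather than BUS_MCEERR_AR on the next access */
	prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0);
#endif

	/* ... mmap(2) the file and do real work here ... */
	pause();
	return 0;
}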