Re: topics for the file system mini-summit

"Stephen C. Tweedie" <sct@xxxxxxxxxx> writes:
 
> The inclusion of checksums would certainly allow us to harden things.
> In the above scenario, failure of the checksum test would allow us to
> discard corrupt indirect blocks before we could allow any harm to come
> to other disk blocks.  But that only works for cases where the checksum
> notices the problem; if we're talking about possible OS bugs, memory
> corruption etc. then it is quite possible to get corruption in the
> in-memory copy, which gets properly checksummed and written to disk,
> so you can't rely on that catching all cases.

I don't think you'll ever get a good solution for random kernel memory
corruption - if that happens you are dead no matter what you do. Even
if your file system still works, your application will eventually
produce garbage once its own data gets corrupted.

Limiting detection to on-storage corruption is entirely reasonable.
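
To make that concrete, here is a rough userspace sketch of what per-block
verification might look like: seal a block with a checksum before it goes
to disk, verify it when it is read back, and refuse to use it on a
mismatch. The block layout and the fletcher32()/seal_block()/verify_block()
helpers are made up for illustration - this is not ext3 code, and a real
file system would more likely use something like CRC32c.

/*
 * Minimal sketch: verify a per-block checksum when an indirect block is
 * read back from disk.  This catches on-storage corruption; corruption of
 * the in-memory copy before it is checksummed is invisible here.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* Simple Fletcher-32 over the block payload (illustrative choice). */
static uint32_t fletcher32(const uint8_t *data, size_t len)
{
	uint32_t a = 0, b = 0;
	for (size_t i = 0; i < len; i++) {
		a = (a + data[i]) % 65535;
		b = (b + a) % 65535;
	}
	return (b << 16) | a;
}

/* Hypothetical on-disk layout: payload followed by a trailing checksum. */
struct indirect_block {
	uint8_t  payload[BLOCK_SIZE - sizeof(uint32_t)];
	uint32_t csum;
};

/* Called just before the block is written out. */
static void seal_block(struct indirect_block *b)
{
	b->csum = fletcher32(b->payload, sizeof(b->payload));
}

/* Called after the block is read back; returns 0 if it may be trusted. */
static int verify_block(const struct indirect_block *b)
{
	return fletcher32(b->payload, sizeof(b->payload)) == b->csum ? 0 : -1;
}

int main(void)
{
	struct indirect_block b;

	memset(&b, 0, sizeof(b));
	b.payload[0] = 0x42;
	seal_block(&b);

	/* Simulate a bit flip that happened on the storage medium. */
	b.payload[100] ^= 0x01;

	if (verify_block(&b))
		printf("checksum mismatch: discard block before following its pointers\n");
	else
		printf("block ok\n");

	return 0;
}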

Also, handling 100% of all cases is not feasible anyway. Handling more
cases than we do today would already be a big step forward.

"The perfect is the enemy of the good"

-Andi
