On 10/25/2010 09:43 PM, Eric Sandeen wrote:
>
> Now, extN has this feature of recording fs errors in the superblock,
> but I'm not sure we distinguish between "errors which require a fsck"
> and others?

That is definitely a good question - is it right to set a generic error
flag if 'only' I/O errors came up? The problem is that the error flag is
set from ext4_error() and ext4_abort(), which are called all over the
code and which make no distinction between a plain I/O error and a real
filesystem consistency issue.

> Anyway your characterization of xfs is wrong, IMHO, it's:
>
> Mount (possibly replaying the journal) because all should be well,
> we have faith in our hardware and our software.
> If during runtime the fs encounters a severe metadata error, it will
> shut down, and this is your cue to unmount and run xfs_repair, then
> remount. Doesn't seem backwards to me. ;) Requiring that fsck
> prior to the first mount makes no sense for a journaling fs.
>
> However, Bernd's issue is probably an issue in general with XFS
> as well (which doesn't record error state on-disk) - how to quickly
> know whether the filesystem you're about to mount in a cluster has
> a -known- integrity issue from a previous mount and really does
> require a fsck.
>
> For XFS, you have to have monitored the previous mount, I guess,
> and watched for any errors the kernel threw when it encountered them.

It really would be helpful if filesystems provided a health file, as
Lustre does. A generic VFS proc/sys file or ioctl would give us a
uniform interface. I probably should write a patch for it ;)

Cheers,
Bernd
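[Editor's note: the "recording fs errors in the superblock" mechanism discussed above can be observed from userspace. The sketch below reads the `s_state` field of an ext2/3/4 superblock directly (the same field `tune2fs -l` reports as "Filesystem state"); the field offsets and flag values come from the kernel's `fs/ext4/ext4.h`. The helper names are illustrative, not from any tool in the thread.]

```python
import struct

# ext2/3/4 on-disk constants (see fs/ext4/ext4.h in the kernel tree)
EXT2_SUPER_OFFSET = 1024   # superblock starts 1 KiB into the device
EXT2_SUPER_MAGIC = 0xEF53  # s_magic, little-endian u16 at offset 56
EXT2_VALID_FS = 0x0001     # cleanly unmounted
EXT2_ERROR_FS = 0x0002     # errors recorded (set via ext4_error()/ext4_abort())

def read_ext_state(path):
    """Return the raw s_state field of an ext* superblock on the given
    device or image file, or None if no ext* magic is found."""
    with open(path, "rb") as dev:
        dev.seek(EXT2_SUPER_OFFSET)
        sb = dev.read(64)
    magic, state = struct.unpack_from("<HH", sb, 56)  # s_magic, s_state
    if magic != EXT2_SUPER_MAGIC:
        return None
    return state

def describe_state(state):
    """Human-readable rendering of the s_state flags."""
    if state is None:
        return "not an ext2/3/4 filesystem"
    parts = ["clean" if state & EXT2_VALID_FS else "not cleanly unmounted"]
    if state & EXT2_ERROR_FS:
        parts.append("errors recorded")
    return ", ".join(parts)
```

Note that, as discussed above, a set EXT2_ERROR_FS bit only tells you *some* ext4_error()/ext4_abort() path fired; it does not say whether the cause was transient I/O trouble or genuine on-disk corruption.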
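[Editor's note: the health file Bernd refers to is Lustre's health_check entry under /proc, which a monitor can poll before deciding whether a cluster node needs repair. A minimal sketch of such a poller follows; the default path is Lustre's, while any generic VFS-wide equivalent is, as the mail says, only a proposal and therefore hypothetical here.]

```python
def fs_is_healthy(health_path="/proc/fs/lustre/health_check"):
    """Return True if the health file reports 'healthy'.

    The default path is Lustre's health file; a generic per-VFS
    health file or ioctl (as proposed in the mail above) does not
    exist, so any other path here is an assumption.
    """
    try:
        with open(health_path) as f:
            return f.read().strip() == "healthy"
    except OSError:
        # No health interface exposed: treat as unknown/unhealthy so a
        # cluster manager errs on the side of checking the fs.
        return False
```

A cluster mount script could call this before mounting and fall back to fsck/xfs_repair when it returns False, which is exactly the "known integrity issue from a previous mount" decision the thread is after.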