On 01/06/2015 04:37 PM, Gary Greene wrote:
> This has been discussed to death on various lists, including the
> LKML...
> Almost every controller and drive out there now lies about what is
> and isn’t flushed to disk, making it nigh on impossible for the
> kernel to reliably know 100% of the time that the data HAS been
> flushed to disk. This is part of the reason why it is always a Good
> Idea™ to have some sort of pause in the shut down to ensure that it
> IS flushed.
That's almost entirely irrelevant to the original question.
(Feel free to correct me if I'm wrong in the following)
A filesystem has three states: clean, dirty, and dirty with errors.
When a filesystem is unmounted, the cache is flushed and, as the final
step, the filesystem is marked clean. This is the expected state when a
filesystem is mounted. Once a filesystem is mounted read/write, it is
marked dirty. If a filesystem is already dirty when it is mounted, then
it wasn't unmounted properly. In the case of a journaled filesystem,
the journal is typically replayed and the filesystem is then mounted
without a full check.
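For example, on ext2/3/4 you can read the recorded state straight from
the superblock (the device name below is just a placeholder):

    # Print the superblock header and pull out the state flag; it will
    # read "clean", "not clean", or "clean with errors".
    dumpe2fs -h /dev/sda1 | grep 'Filesystem state'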
The last case, dirty with errors, indicates that the kernel found
invalid data while the filesystem was mounted and recorded that fact in
the filesystem metadata. This is normally the only condition that
forces an fsck at boot, and it will also normally produce log messages
at the time the errors are encountered. If your filesystems are being
force-checked on boot, the logs should usually tell you why. It's not a
matter of a timeout or some device failing to flush its cache.
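If that's what is happening, the kernel log should say so when the
error is hit, and recent e2fsprogs also record an error count in the
superblock. A quick way to check both (again, /dev/sda1 is only an
example):

    # Filesystem error messages in the kernel log (ext4 wording shown;
    # other filesystems log differently).
    dmesg | grep -i 'EXT4-fs error'

    # On recent e2fsprogs, the superblock tracks errors as well:
    tune2fs -l /dev/sda1 | grep -i error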
Of course, the other possibility is simply that you formatted the
filesystems yourself and they have a maximum mount count or a check
interval set. Use 'tune2fs -l' to check those two values. If either of
them is set, then there is no problem with your system: it is behaving
as designed, forcing a periodic check because that is the default
behavior.
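Something like this will show those settings (placeholder device
again); -1 for the mount count and 0/"none" for the interval mean the
respective check is disabled:

    # Show the periodic-check settings recorded in the superblock.
    tune2fs -l /dev/sda1 | grep -E 'Mount count|Maximum mount|Check interval|Next check'

    # To turn the periodic checks off entirely (a deliberate trade-off,
    # not a repair):
    tune2fs -c -1 -i 0 /dev/sda1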