Jan Kara wrote:
armangau_philippe@xxxxxxx wrote:
Hi all,
We are building a new system which is going to use ext3 FS. We would like
to know more about the behavior of ext3 in the case of failure. But
before I proceed, I would like to share some information about our future
system.
* Our application always does an fsync on files.
* When symbolic links (more specifically, fast symlinks) are created,
  the host directory is also fsync'ed (see the sketch below).
* Our application is also going to front an EMC disk array configured
  using RAID5 or RAID6.
* We will be using multipathing, so we can assume that no disk errors
  will be reported.
In this context, we would like to know the following for recovery after a
power outage:
1. When will an fsck have to be run (not counting the scheduled fsck
every N mounts)?
2. In the case of a crash, are the fsync-ed file contents and symbolic
links safe no matter what?
Thanks,
This is an interesting twist on some of the discussion that we have had
at the recent workshop and in other forums on hardening file systems in
order to avoid the need for fsck.
The twist is that we have a disk that will not lose power without being
able to write all of the data it has been sent to the platter - this is
the case for most mid-range or higher disk arrays.
If the application uses fsync() precisely on files, directories and
symlinks, it wants to know that all objects that have completed a
successful fsync are safe on disk. It also wants to know that the file
system will not need any recovery beyond replaying transactions after a
power outage/reboot - simply mount, let the transactions be replayed,
and you should be good to go without an fsck.
The hard part of the question is to understand when and how often we
will fail to deliver this easy case. Also, does any of the hardening in
ext4 help here?
I'm probably misunderstanding something because the answer seems to be
too obvious to me :) But anyway I'll write it so that you can correct
me:
Due to journalling guarantees, you should get a consistent FS whenever
you replay the log (unless there are some software bugs or hardware
problems, which is why fsck is run once per several mounts anyway).
If you fsync() your data, you are also guaranteed that your data are
safely on disk when fsync() returns. So what is the question here?
Honza
I think that the real question here is: in practice, how often does this
really hold true? And when it fails, how long does it take to recover the
file system?
There are a lot of odd errors that can happen when you monitor a large enough
number of file systems. In my experience, I would guess that disk errors are
clearly the leading cause of issues, followed by software bugs (file system,
firmware, etc.) and then a group of errors caused by various occasional things
(bad DRAM in the server/HBA/disk, bad cables, etc.). Note that using a high-end
array does not eliminate errors; it just reduces the rate (hopefully by a large
amount).
What is really hard to predict is the rate of failures that require an fsck
with our current file system (say, for a specific hardware setup), and how much
changes like the checksumming in ext4 can help us ride through these errors
without needing a full fsck.
This rate has a direct impact on how much pain an fsck will inflict and how
important redundancy is to avoid having the file system be a single point of
failure.
ric