Arjan van de Ven wrote:
> Ric Wheeler wrote:
>> Any thoughts about what the right semantics are for properly doing a
>> forced unmount, and whether it is doable near term (as opposed to
>> the more strategic/long-term issues laid out in this thread)?
> I would like to ask you to take one step back; in the past, when I have
> seen people want "forced unmount", they actually wanted something else
> that they thought (at that point, incorrectly) forced unmount would solve.
> There are a few things an unmount does:
> 1) detach from the namespace (tree)
> 2) shut down the filesystem to
>    2a) allow someone else to mount/fsck/etc. it
>    2b) finish stuff up and put it in a known state (clean)
> 3) shut down IO to a fs so another node can take over
>    (which is what the "incorrectly" above is about, technically)
We have 20-30 100GB file systems on a single box (to avoid the long fsck
time). When you hit an issue with one file system, say a panic, or with a
set of file systems (a dead drive might take out 5 of them), we want to
be able to keep the box up, since it is still serving up something like
2.5TB of storage to the user ;-)
So what I want is all of the following:
(1) do your 2a - be able to fsck and repair corruptions, and then
hopefully remount that file system without a reboot of the box.
(2) leave all other file systems (including those of the same type)
running without error (good fault isolation).
(3) don't try to clean up that file system's state - error out any
existing IOs; it is perfectly fine for processes using it to get blown
away. In effect, treat it (from the file system's point of view) just
like you would a power outage & reboot: replay the journal & recover
only after you get the disk back.
(4) make sure that a hung disk or panicked file system does not
prevent an intentional reboot.
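A sketch of the repair-and-remount cycle in (1) and (3), assuming the failed file system can actually be detached (which is exactly what is missing today); the device and mount-point names are hypothetical:

```shell
# Hypothetical recovery cycle for one failed file system, leaving the
# other mounts on the box untouched. "umount -l" is a lazy detach; it
# does NOT error out existing IOs, which is the semantics asked for above.
recover_fs() {
    local dev="$1" mnt="$2"
    umount -l "$mnt"       # detach from the namespace now
    fsck -y "$dev"         # repair; journal replay happens here
    mount "$dev" "$mnt"    # bring it back without rebooting the box
}

# recover_fs /dev/sdb1 /mnt/fs1
```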
In a conversation about this with Trond, I think that he has some other
specific motivations from an NFS point of view as well.
ric
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html