On 02/05/11 18:38, Jameson Graef Rollins wrote:
> On Mon, 02 May 2011 11:11:25 +0200, David Brown <david@xxxxxxxxxxxxxxx> wrote:
>> This is not directly related to your issues here, but it is possible to
>> make a 1-disk raid1 set so that you are not normally degraded.  When you
>> want to do the backup, you can grow the raid1 set with the usb disk,
>> wait for the resync, then fail it and remove it, then "grow" the raid1
>> back to 1 disk.  That way you don't feel you are always living in a
>> degraded state.
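For reference, the grow/shrink cycle described above might look something like the sketch below. Device names (/dev/md0 for the array, /dev/sdX1 for the usb disk's partition) are examples only - substitute your own:

```shell
# Add the usb disk and grow the single-disk raid1 to two devices
mdadm /dev/md0 --add /dev/sdX1
mdadm --grow /dev/md0 --raid-devices=2

# Block until the resync has completed
mdadm --wait /dev/md0

# Fail and remove the usb disk, then shrink back to one device
# (--force is needed because a 1-device raid1 is unusual)
mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1
mdadm --grow /dev/md0 --raid-devices=1 --force
```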
> Hi, David.  I appreciate the concern, but I am not at all concerned
> about "living in a degraded state".  I'm far more concerned about data
> loss and the fact that this bug has seemingly revealed that some
> commonly held assumptions and uses of software raid are wrong, with
> potentially far-reaching effects.
>
> I also don't see how the setup you're describing will avoid this bug.
> If this bug is triggered by having a layer between md and the filesystem
> and then changing the raid configuration by adding or removing a disk,
> then I don't see how there's a difference between hot-adding to a
> degraded array and growing a single-disk raid1.  In fact, I would
> suspect that your suggestion would be more problematic because it
> involves *two* raid reconfigurations (grow and then shrink) rather than
> one (hot-add) to achieve the same result.  I imagine that each raid
> reconfiguration could potentially trigger the bug.  But I still don't
> have a clear understanding of what is going on here to be sure.
I didn't mean to suggest this as a way around these issues - I was just
making a side point. Like you and others in this thread, I am concerned
about failures that could be caused by having the sort of layered and
non-homogeneous raid you describe.
I merely mentioned single-disk raid1 "mirrors" as an interesting feature
you can get with md raid. Many people don't like to have their system
in a continuous error state - it can make it harder to notice when you
have a /real/ problem.  A single-disk "mirror" gives you the same
features, but without the "degraded" state.
As you say, it is conceivable that adding disks to or removing them from
the array could make matters worse.
From what I have read so far, it looks like you can get around the
problems here if the usb disk is attached when the block layers are
built up (i.e., when dm-crypt is activated, and the lvm and filesystem
are set up on top of it).  It should then be safe to remove it, and
re-attach it
later. Of course, it's hardly ideal to have to attach your backup
device every time you boot the machine!
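To make the ordering concrete, a sketch of what that might look like follows. It assumes a raid1 /dev/md0 with dm-crypt and lvm on top, an internal member /dev/sda1, and a usb member /dev/sdX1 - all of these names are illustrative, not taken from the thread:

```shell
# Build the stack with the usb member present from the start
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdX1   # assemble md with both members
cryptsetup luksOpen /dev/md0 cryptvol           # activate dm-crypt on top of md
vgchange -ay vg0                                # activate the lvm volume group
mount /dev/vg0/root /mnt                        # mount the filesystem

# Later, detach the usb disk for safekeeping...
mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1
# ...and re-add it before the next backup run
mdadm /dev/md0 --re-add /dev/sdX1
```

The point is only that the block layers above md are constructed while the usb member is part of the array; whether this actually sidesteps the bug under discussion is, as noted, not certain.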
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html