Re: Two disk RAID10 inactive on boot if partition is missing

On 17 May 2016 at 14:31, Phil Turmel <philip@xxxxxxxxxx> wrote:
> See the "Unclean Shutdown" section of "man md".
>
> The kernel parameter you need is "md_mod.start_dirty_degraded=1".
>
> Doing this is a really good way to end up with split brain.  Why do you
> need to regularly boot without the devices that were present at shutdown?
Thanks for the md reference.
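
If I do go down that route, I assume the parameter just gets appended
to the kernel command line, e.g. via GRUB (untested on my side, so
treat this as a sketch):

    # /etc/default/grub -- append to whatever is already set:
    GRUB_CMDLINE_LINUX="md_mod.start_dirty_degraded=1"

and then regenerate the config with update-grub (or grub2-mkconfig,
depending on distro) before rebooting.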

I don't need to regularly boot without the devices that were present
at shutdown, but this is RAID! Given that it's not a boot device, and
frankly even if it were, I would expect the default to be to start the
array anyway, with replacing the missing device then being an option.
Otherwise it removes a lot of the point of the resilience.

I'll have a look at this further later tonight. The reason it is
failing, according to '-v', is that /dev/sda12 is 'busy'. Looks like I
need to read the docs further. I got it to come back up by using stop
and assemble, but yes, it looks like split brain is a real risk when I
do that. Fortunately I'm not using this for anything important yet -
I'm doing this precisely so that I know what to do when a device does
fail. Once it's all working I'm going to leave it alone.
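
For the record, what I ran was roughly along these lines (from memory,
and /dev/md0 is just a placeholder for my actual array):

    mdadm --stop /dev/md0
    mdadm --assemble --run /dev/md0 /dev/sda12
    cat /proc/mdstat

--run being, as I understand it, what lets the array start with fewer
devices than it had last time it was active, which is of course
exactly where the split-brain warning comes in.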