Raid auto-assembly upon boot - device order

Hi,

We are running a RAID1 on top of two RAID0s. In this specific case we
cannot use RAID10 (the underlying devices have different sizes, etc.).
Upon booting, the RAID1 is always started degraded, with one of the
RAID0s missing. The log says the missing array could not be found;
however, after boot /proc/mdstat lists both RAID0s as OK.

I guess the arrays are either assembled in the wrong order (their mutual
dependencies not being considered), or assembled without letting the
previously assembled device "settle down". I am wondering what the
proper way to fix this would be. The arrays are huge (over 1 TB each)
and recovery takes many hours.

Our mdadm.conf lists the arrays in the proper order, corresponding to
their dependencies.
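For reference, a minimal sketch of the kind of layout we mean. All
device names and UUIDs below are placeholders, not our actual
configuration; the point is that the RAID0 legs are listed before the
RAID1 that stacks on them, and that the md devices themselves appear on
the DEVICE line so mdadm will consider them as components of the upper
array:

    # DEVICE must include the md devices, or mdadm will not
    # scan them as components for the stacked RAID1.
    DEVICE /dev/sd* /dev/md0 /dev/md1

    # The two RAID0 legs (assembled first):
    ARRAY /dev/md0 level=raid0 UUID=<uuid-of-first-raid0>
    ARRAY /dev/md1 level=raid0 UUID=<uuid-of-second-raid0>

    # The RAID1 on top of them (assembled last):
    ARRAY /dev/md2 level=raid1 devices=/dev/md0,/dev/md1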

Thanks a lot for any help or suggestions.

Best regards,

Pavel.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

