Hi,

We are running a RAID1 on top of two RAID0s. In this specific case we cannot use RAID10 (the component devices have different sizes, etc.).

Upon booting, the RAID1 is always started degraded, with one of the RAID0s missing; the boot log says the missing array could not be found. However, once the system is up, /proc/mdstat lists both RAID0s as OK. My guess is that the arrays are either assembled in the wrong order (their mutual dependencies not being considered), or assembled without letting the previously built devices "settle down" first.

I am wondering what the proper way to fix this would be. The arrays are huge (over 1 TB each) and recovery takes many hours. Our mdadm.conf lists the arrays in the proper order, corresponding to their dependency; a minimal sketch of the layout is included below.

Thanks a lot for any help or suggestions.

Best regards,
Pavel.
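P.S. For reference, here is a minimal sketch of how our mdadm.conf is laid out. The device names, md numbers, and member lists are placeholders for illustration, not our actual values:

    # /etc/mdadm.conf -- sketch only; device names and md numbers
    # are placeholders, not our real values.
    DEVICE partitions

    # The two RAID0 legs, which must be assembled first...
    ARRAY /dev/md0 level=raid0 devices=/dev/sda1,/dev/sdb1
    ARRAY /dev/md1 level=raid0 devices=/dev/sdc1,/dev/sdd1

    # ...and the RAID1 mirror built on top of them, listed last
    # since it depends on md0 and md1 already existing.
    ARRAY /dev/md2 level=raid1 devices=/dev/md0,/dev/md1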