I have a software RAID1 that seems to be working correctly:

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      976629568 blocks super 1.2 [2/2] [UU]

unused devices: <none>

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun May 17 15:21:30 2015
     Raid Level : raid1
     Array Size : 976629568 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976629568 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Mon May 18 10:28:36 2015
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : eprb21:0  (local to host eprb21)
           UUID : 0901fe50:444a29b6:d3caff14:e45ef9cc
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

On the other hand, when the system boots I briefly see the following messages:

doing fast boot
Creating device nodes with udev
udevd[174]: failed to execute '/sbin/mdadm' '/sbin/mdadm --incremental /dev/sdb1
udevd[175]: failed to execute '/sbin/mdadm' '/sbin/mdadm --incremental /dev/sdc1 --offroot': No such file or directory

Otherwise the system appears to run normally, and after booting /dev/md0 seems to be working correctly.

What does this mean, and should I worry about it? What can I do about it?

My system is openSUSE 12.2.

Thanks a lot,
Hans Malissa
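
P.S. In case it helps with diagnosing this: since the messages appear while udev is creating device nodes, I assume I could check whether /sbin/mdadm is actually included in the initrd with something like the following. This is only a guess at the layout (it assumes the initrd is a gzip-compressed cpio archive at /boot/initrd-$(uname -r); I haven't verified that this is how openSUSE 12.2 lays it out):

# zcat /boot/initrd-$(uname -r) | cpio -t --quiet | grep mdadm   # path and compression assumed, not verified

If that lists nothing, it would at least be consistent with udev not finding the mdadm binary during early boot, even though the array assembles fine later from the full system.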