Hello,

I crashed my RAID5 array. After rebooting, only 4 of the 5 disks were available, and while rescanning the controller I lost a second drive :-( After the next rescan all drives were there again, but the array will no longer come up. I am sure nothing was written to the array while the drives were missing.

After the reboot:

/dev/md2:
           Version : 1.2
     Creation Time : Sun May 13 17:31:22 2018
        Raid Level : raid5
        Array Size : 39048429568 (37239.48 GiB 39985.59 GB)
     Used Dev Size : 9762107392 (9309.87 GiB 9996.40 GB)
      Raid Devices : 5
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Dec 12 17:54:18 2018
             State : clean, degraded
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : inet:2  (local to host inet)
              UUID : 7c3e5def:31def6df:2ae53e66:e920a763
            Events : 133352

    Number   Major   Minor   RaidDevice State
       0       8      161        0      active sync   /dev/sdk1
       1       8      145        1      active sync   /dev/sdj1
       -       0        0        2      removed
       4       8      177        3      active sync   /dev/sdl1
       5       8      129        4      active sync   /dev/sdi1

The disk that was failing is /dev/sdm1. My problem is that I now have two drives whose Device Role is "spare" :-(

~ # mdadm --examine /dev/sd[ijklm]1
/dev/sdi1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7c3e5def:31def6df:2ae53e66:e920a763
           Name : inet:2  (local to host inet)
  Creation Time : Sun May 13 17:31:22 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19524214784 (9309.87 GiB 9996.40 GB)
     Array Size : 39048429568 (37239.48 GiB 39985.59 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : active
    Device UUID : 8c1db850:62b6fbc1:4c135b6a:b2857289

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec 12 18:17:48 2018
  Bad Block Log : 512 entries available at offset 48 sectors
       Checksum : ba8ac3a7 - correct
         Events : 165464

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 4
    Array State : .A.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdj1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7c3e5def:31def6df:2ae53e66:e920a763
           Name : inet:2  (local to host inet)
  Creation Time : Sun May 13 17:31:22 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19524214784 (9309.87 GiB 9996.40 GB)
     Array Size : 39048429568 (37239.48 GiB 39985.59 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : clean
    Device UUID : e05e3647:dd266dbf:1369dc6f:662b1776

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec 12 18:17:48 2018
  Bad Block Log : 512 entries available at offset 48 sectors
       Checksum : a0a070ed - correct
         Events : 165464

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 1
    Array State : .A.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdk1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x9
     Array UUID : 7c3e5def:31def6df:2ae53e66:e920a763
           Name : inet:2  (local to host inet)
  Creation Time : Sun May 13 17:31:22 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19524214784 (9309.87 GiB 9996.40 GB)
     Array Size : 39048429568 (37239.48 GiB 39985.59 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : active
    Device UUID : e8636a07:d2e55abb:044263c8:34c4b9b3

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec 12 18:17:48 2018
  Bad Block Log : 512 entries available at offset 48 sectors - bad blocks present.
       Checksum : f2eba6b0 - correct
         Events : 165464

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : spare
    Array State : .A.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdl1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7c3e5def:31def6df:2ae53e66:e920a763
           Name : inet:2  (local to host inet)
  Creation Time : Sun May 13 17:31:22 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19524214784 (9309.87 GiB 9996.40 GB)
     Array Size : 39048429568 (37239.48 GiB 39985.59 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : active
    Device UUID : 46f5ee93:d261cf2c:6f335cb5:98f93fa1

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec 12 18:17:48 2018
  Bad Block Log : 512 entries available at offset 48 sectors
       Checksum : cb63dad9 - correct
         Events : 165464

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 3
    Array State : .A.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdm1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x9
     Array UUID : 7c3e5def:31def6df:2ae53e66:e920a763
           Name : inet:2  (local to host inet)
  Creation Time : Sun May 13 17:31:22 2018
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 19524214784 (9309.87 GiB 9996.40 GB)
     Array Size : 39048429568 (37239.48 GiB 39985.59 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=0 sectors
          State : active
    Device UUID : 894b94b5:595e4366:e43f072a:681b6d3e

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Dec 12 18:17:48 2018
  Bad Block Log : 512 entries available at offset 48 sectors - bad blocks present.
       Checksum : 38555bee - correct
         Events : 165464

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : spare
    Array State : .A.AA ('A' == active, '.' == missing, 'R' == replacing)

~ # mdadm --examine /dev/sd[ijklm]1 | grep Events
         Events : 165464
         Events : 165464
         Events : 165464
         Events : 165464
         Events : 165464

That looks fine, I think; the problem is that two drives are marked as spare. When I do:

~ # mdadm --assemble --force --run /dev/md2 /dev/sd[ijklm]1
mdadm: failed to RUN_ARRAY /dev/md2: Input/output error
mdadm: Not enough devices to start the array.

and mdadm -D then shows a raid0:

~ # mdadm -D /dev/md2
/dev/md2:
           Version : 1.2
        Raid Level : raid0
     Total Devices : 5
       Persistence : Superblock is persistent

             State : inactive
   Working Devices : 5

              Name : inet:2  (local to host inet)
              UUID : 7c3e5def:31def6df:2ae53e66:e920a763
            Events : 165464

    Number   Major   Minor   RaidDevice

       -       8      193        -        /dev/sdm1
       -       8      177        -        /dev/sdl1
       -       8      161        -        /dev/sdk1
       -       8      145        -        /dev/sdj1
       -       8      129        -        /dev/sdi1
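One thing I have not tried yet is stopping the now-inactive array and forcing the assembly once more with verbose output. This is only a sketch of what I mean, not something I have run; I do not know whether mdadm will accept the two drives whose superblocks now say "spare":

~ # mdadm --stop /dev/md2
~ # mdadm --assemble --force --verbose /dev/md2 /dev/sd[ijklm]1

Since only three superblocks (sdi1, sdj1, sdl1) still claim an active role, and a 5-disk RAID5 needs at least four members, I expect this to fail the same way, but --verbose should at least show why each device gets rejected.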
My idea now is to run:

mdadm --create --assume-clean --level=5 --raid-devices=5 /dev/md2 /dev/sdk1 /dev/sdj1 /dev/sdm1 /dev/sdl1 /dev/sdi1

but everyone says it is better to ask here before doing something like that.

Regards,
Alex
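PS: If recreating the array really is the only way, my plan would be to keep it as reversible as possible. The following is only a sketch of what I have in mind, not something I have run: the device order (sdk1, sdj1, sdm1, sdl1, sdi1 for slots 0-4) is inferred from the -D output above plus the fact that sdm1 was the disk in the removed slot 2, and --data-offset=129M is my conversion of the 264192-sector Data Offset that --examine reports (264192 * 512 bytes = 129 MiB), assuming my mdadm is new enough (3.3+) to accept that option:

~ # mdadm --stop /dev/md2
~ # mdadm --create /dev/md2 --assume-clean \
        --level=5 --raid-devices=5 --chunk=512 \
        --layout=left-symmetric --metadata=1.2 \
        --data-offset=129M \
        /dev/sdk1 /dev/sdj1 /dev/sdm1 /dev/sdl1 /dev/sdi1
~ # mdadm --examine /dev/sdk1    # verify Data Offset still matches the old 264192 sectors
~ # fsck -n /dev/md2             # read-only filesystem check before mounting anything

Or would it be safer to run the whole experiment on top of overlay devices as described in the linux-raid wiki, and/or to put "missing" in place of one of the two bad-block disks (sdk1 or sdm1), so the array comes up degraded and nothing can be resynced onto that disk?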