Hi, all --

I recently migrated our disk farm to a new box with a new OS build (openSuSE from KNOPPIX). Aside from the usual challenges of setting up the world again, I have a 3-device RAID5 volume that won't start. The other metadevice is fine, though, so I think we can say that the md system itself is running. Soooooo ... where do I start?

diskfarm:~ # cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3] sdd1[4] sdb1[0]
      11720265216 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [U_UU]

md127 : inactive sdl2[0](S) sdj2[3](S) sdf2[1](S)
      2196934199 blocks super 1.2

unused devices: <none>

[No, I don't know why md127 was listed first a moment ago!]

I tried reassembling the device, but mdadm doesn't like it.

diskfarm:~ # mdadm --stop /dev/md127
mdadm: stopped /dev/md127
diskfarm:~ # mdadm --assemble --scan
mdadm: /dev/md/750Graid5md assembled from 1 drive - not enough to start the array.
mdadm: Found some drive for an array that is already active: /dev/md/0
mdadm: giving up.
mdadm: No arrays found in config file or automatically

But ... what's with just 1 drive?

diskfarm:~ # for D in /dev/sd[fjl] ; do parted $D print ; done
Model: ATA WDC WD7500BPKX-7 (scsi)
Disk /dev/sdf: 750GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name              Flags
 2      1049kB  750GB  750GB  ntfs         Linux RAID        raid
 3      750GB   750GB  134MB  ext3         Linux filesystem

Model: ATA WDC WD7500BPKX-7 (scsi)
Disk /dev/sdj: 750GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name              Flags
 2      1049kB  750GB  750GB  ntfs         Linux RAID        raid
 3      750GB   750GB  134MB  xfs          Linux filesystem

Model: ATA Hitachi HDE72101 (scsi)
Disk /dev/sdl: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  4227MB  4226MB  primary  ntfs         diag, type=27
 2      4227MB  754GB   750GB   primary  reiserfs     raid, type=fd
 3      754GB   754GB   134MB   primary  reiserfs     type=83
 4      754GB   1000GB  246GB   primary  reiserfs     type=83

Slice 2 on each disk is the RAID partition, slice 3 on each is a little filesystem for bare-bones info, and slice 4 on sdl is a normal basic filesystem for scratch content.

diskfarm:~ # mdadm --examine /dev/sd[fjl]2 | egrep '/dev|Name|Role|State|Checksum|Events|UUID'
/dev/sdf2:
     Array UUID : 88575f01:592167fd:bd9f9ba1:a61fafc4
           Name : diskfarm:750Graid5md  (local to host diskfarm)
          State : clean
    Device UUID : e916fc67:b8b7fc59:51440134:fa431d02
       Checksum : 43f9e7a4 - correct
         Events : 720
    Device Role : Active device 1
    Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdj2:
     Array UUID : 88575f01:592167fd:bd9f9ba1:a61fafc4
           Name : diskfarm:750Graid5md  (local to host diskfarm)
          State : clean
    Device UUID : 0b847f84:83e80a3d:a0dc11e7:60bffc9f
       Checksum : 9522782b - correct
         Events : 177792
    Device Role : Active device 2
    Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdl2:
     Array UUID : 88575f01:592167fd:bd9f9ba1:a61fafc4
           Name : diskfarm:750Graid5md  (local to host diskfarm)
          State : clean
    Device UUID : cc53440e:cb9180e4:be4c38d4:88a676eb
       Checksum : fef95256 - correct
         Events : 177794
    Device Role : Active device 0
    Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)

Slice f2's superblock looks great -- it even claims all three members are active (AAA) -- but j2 thinks device 1 is missing (A.A) and l2 thinks devices 1 and 2 are both gone (A..), even though all three partitions are present and their checksums are correct. Worse, the Events counter on sdf2 is frighteningly small next to the other two. Where did it go?!?

So maybe I should consider sdf2 failed, reassemble from the other two [only], and then put f2 back in?
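For the record, here's the diagnostic pass I was planning next -- not run yet, just a sketch, and the member list is my own reading of which partitions belong to this array:

  # Stop the half-assembled array, then retry with the members named
  # explicitly and -v so mdadm says why it rejects each device.
  mdadm --stop /dev/md127
  mdadm -v --assemble /dev/md127 /dev/sdl2 /dev/sdj2 /dev/sdf2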
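And if the Events mismatch really is the whole story, this is roughly the recovery I have in mind -- again untested, so please sanity-check it before I pull the trigger:

  # Force-assemble degraded from the two fresh members; --force lets
  # mdadm paper over the small event-count gap between l2 and j2.
  mdadm --stop /dev/md127
  mdadm --assemble --force /dev/md127 /dev/sdl2 /dev/sdj2

  # If the array comes up degraded (2 of 3), add the stale member
  # back and let RAID5 rebuild it from parity.
  mdadm --manage /dev/md127 --add /dev/sdf2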
Definitely time to stop, take a deep breath, and ask for help :-)

TIA & HAND

:-D
--
David T-G
See http://justpickone.org/davidtg/email/
See http://justpickone.org/davidtg/tofu.txt