I'm quite familiar with mdadm, so when my 4-disk RAID5 had a disk with serious errors I wasn't worried. Then the ops person pulled the wrong disk, so I had the classic double disk failure.

The disks:

  /dev/sda3  healthy
  /dev/sdb3  healthy
  /dev/sdc3  dying, out of the RAID for 24 hours
  /dev/sdd3  accidentally pulled around 6pm

The event counts were almost the same:

# ./mdadm -E /dev/sd[abd]3 | grep "Events"
         Events : 15681
         Events : 15681
         Events : 15676

The update times:

# ./mdadm -E /dev/sd[abd]3 | grep "Update"
    Update Time : Thu May 31 17:59:23 2018
    Update Time : Thu May 31 17:59:23 2018
    Update Time : Thu May 31 17:56:50 2018

Sanity check:

# ./mdadm -E /dev/sd[abd]3 | grep "Array UUI"
     Array UUID : 0faeb093:be173348:32e65c78:26f61e06
     Array UUID : 0faeb093:be173348:32e65c78:26f61e06
     Array UUID : 0faeb093:be173348:32e65c78:26f61e06

So I figured, no big deal, let's assemble them:

# mdadm --assemble -v -v -v --force /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdd3 | nc termbin.com 9999
mdadm: looking for devices for /dev/md1
mdadm: UUID differs from /dev/md0.
mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdd3 is identified as a member of /dev/md1, slot 3.
mdadm: added /dev/sdb3 to /dev/md1 as 1
mdadm: no uptodate device for slot 4 of /dev/md1
mdadm: added /dev/sdd3 to /dev/md1 as 3 (possibly out of date)
mdadm: added /dev/sda3 to /dev/md1 as 0
mdadm: /dev/md1 assembled from 2 drives - not enough to start the array.

I googled around and found a few people who posted things like "--force being ignored", but no resolution. I pored over the mdadm -E output, dmesg, syslog, etc., and ended up just finding things like:

[10325.981522] md: unbind<sda3>
[10326.013207] md: export_rdev(sda3)
[10326.013307] md: unbind<sdd3>
[10326.045221] md: export_rdev(sdd3)
[10326.045306] md: unbind<sdb3>
[10326.073251] md: export_rdev(sdb3)

It really seemed like it should assemble, so I figured maybe Ubuntu 16.04 with mdadm 3.3-2ubuntu7.6 had a bug. I downloaded the latest stable mdadm source (4.0), ran make, and ran that version:

# ./mdadm --assemble -v -v -v --force /dev/md1 /dev/sd[abd]3
mdadm: looking for devices for /dev/md1
mdadm: /dev/sda3 is identified as a member of /dev/md1, slot 0.
mdadm: /dev/sdb3 is identified as a member of /dev/md1, slot 1.
mdadm: /dev/sdd3 is identified as a member of /dev/md1, slot 3.
mdadm: forcing event count in /dev/sdd3(3) from 15676 upto 15681
mdadm: clearing FAULTY flag for device 2 in /dev/md1 for /dev/sdd3
mdadm: Marking array /dev/md1 as 'clean'
mdadm: added /dev/sdb3 to /dev/md1 as 1
mdadm: no uptodate device for slot 2 of /dev/md1
mdadm: added /dev/sdd3 to /dev/md1 as 3
mdadm: added /dev/sda3 to /dev/md1 as 0
mdadm: /dev/md1 has been started with 3 drives (out of 4).

It worked: fsck reported nothing important, everything mounted, and the array has been in use since with no problems.

Bug? Or maybe just an improvement in 4.0? Should I file a bug with Ubuntu?
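
P.S. In case anyone wants to try the same out-of-tree binary: building mdadm from source needed nothing more than make (no configure step), and the resulting binary can be run in place as ./mdadm. The exact tarball name below is from memory, so treat it as approximate; releases live under kernel.org's utils/raid/mdadm area:

$ wget https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-4.0.tar.xz
$ tar xf mdadm-4.0.tar.xz
$ cd mdadm-4.0
$ make
$ ./mdadm --version

The --version check is just to confirm you're invoking the freshly built binary rather than the packaged /sbin/mdadm; the --assemble itself of course still needs to be run as root.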