Hello. I recently had problems with my RAID 1 devices booting in degraded mode after a power failure. Upon reboot I ran raidhotadd on each of the five md devices, and they all appeared to reconstruct fine according to /proc/mdstat:

Personalities : [raid1]
read_ahead 1024 sectors
md1 : active raid1 hda8[0] hdc8[1]
      1325248 blocks [2/2] [UU]
md2 : active raid1 hda6[0] hdc6[1]
      521984 blocks [2/2] [UU]
md0 : active raid1 hda5[0] hdc5[1]
      4096448 blocks [2/2] [UU]
md3 : active raid1 hda3[0] hdc3[1]
      61440512 blocks [2/2] [UU]
md4 : active raid1 hda1[0] hdc2[1]
      264960 blocks [2/2] [UU]
unused devices: <none>

I have since noticed the following results from lsraid:

lsraid -a /dev/md4
[dev   9,   4] /dev/md4   48F7D4B2.59257267.3922C6D5.2AD38650 online
[dev   3,   1] /dev/hda1  48F7D4B2.59257267.3922C6D5.2AD38650 good

lsraid -d /dev/hda1
[dev   9,   4] /dev/md4   48F7D4B2.59257267.3922C6D5.2AD38650 online
[dev   3,   1] /dev/hda1  48F7D4B2.59257267.3922C6D5.2AD38650 good

lsraid -d /dev/hdc2
[dev   9,   4] /dev/md4   48F7D4B2.59257267.3922C6D5.2AD38650 online
[dev   3,   1] /dev/hda1  48F7D4B2.59257267.3922C6D5.2AD38650 good
[dev  22,   2] /dev/hdc2  48F7D4B2.59257267.3922C6D5.2AD38650 unbound

and lsraid -R gives me this raidtab:

# This raidtab was generated by lsraid version 0.7.0.
# It was created from a query on the following devices:
#       /dev/md2

# md device [dev   9,   2] /dev/md4 queried online
raiddev /dev/md4
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              64

        device          /dev/hda1
        raid-disk               0
        device          /dev/null
        failed-disk             1

Shouldn't /dev/hdc2 be appearing here? I don't understand why /proc/mdstat shows the /dev/hdcX devices as up while lsraid -R lists them as /dev/null "failed-disk". Shouldn't both RAID superblocks have been updated to list all of the devices that make up that particular RAID device (/dev/hda1 AND /dev/hdc2 in the case above)? And what is the significance of the "unbound" state for /dev/hdc2? Does that mean there is no RAID superblock entry pointing to /dev/hdc2 in the RAID device?

I'm guessing the superblocks are not supposed to look like mine and that I need to run mkraid -f to update them, but I'm perplexed as to how they could have gotten into this state.
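For comparison, here is what I would have expected lsraid -R to produce for md4 if /dev/hdc2 were really bound into the array. This is only my guess, modeled on the output above, not something my system generated:

    # expected raidtab entry for md4 with both mirrors bound (my guess, not real output)
    raiddev /dev/md4
            raid-level              1
            nr-raid-disks           2
            nr-spare-disks          0
            persistent-superblock   1
            chunk-size              64

            device          /dev/hda1
            raid-disk               0
            device          /dev/hdc2
            raid-disk               1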
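Also, if mkraid -f really is the right way to rewrite the superblocks, my rough plan is below. The need to stop the array first and the exact force option are assumptions on my part, so please tell me if any step is wrong or dangerous:

    # rough plan to rebuild the md4 superblocks from /etc/raidtab
    # (my own untested sketch; ordering and flags are assumptions)

    # 1. make sure /etc/raidtab describes md4 with both mirrors:
    #        device /dev/hda1    raid-disk 0
    #        device /dev/hdc2    raid-disk 1

    # 2. take the array offline (assuming it can be unmounted)
    umount /dev/md4
    raidstop /dev/md4

    # 3. rewrite the superblocks from /etc/raidtab; I believe mkraid
    #    warns about existing data and may want a stronger force flag
    mkraid -f /dev/md4

    # 4. bring it back and verify
    raidstart /dev/md4
    cat /proc/mdstat
    lsraid -a /dev/md4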
Thanks for any help you can provide,
Arturo

Additional info: this is Red Hat Linux 8.0 with kernel 2.4.18-14 (md driver 0.90.0). The relevant boot messages are below. The first autorun attempt fails because the raid1 personality is not loaded yet; the second attempt assembles md4 with both mirrors and writes updated superblocks (events: 000000ba) to both hda1 and hdc2, which makes the lsraid output above all the more puzzling to me.

Linux version 2.4.18-14 Red Hat Linux 8.0 3.2-7
md: md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: Autodetecting RAID arrays.
kernel: [events: 000000a1]
kernel: [events: 0000009f]
kernel: md: considering hdc2 ...
kernel: md: adding hdc2 ...
kernel: md: adding hda1 ...
kernel: md: created md4
kernel: md: bind<hda1,1>
kernel: md: bind<hdc2,2>
kernel: md: running: <hdc2><hda1>
kernel: md: hdc2's event counter: 000000b9
kernel: md: hda1's event counter: 000000b9
kernel: md: RAID level 1 does not need chunksize! Continuing anyway.
kernel: kmod: failed to exec /sbin/modprobe -s -k md-personality-3, errno = 2
kernel: md: personality 3 is not loaded!
kernel: md: do_md_run() returned -22
kernel: md: md4 stopped.
kernel: md: unbind<hdc2,1>
kernel: md: export_rdev(hdc2)
kernel: md: unbind<hda1,0>
kernel: md: export_rdev(hda1)
kernel: md: ... autorun DONE.
kernel: md: raid1 personality registered as nr 3
kernel: md: Autodetecting RAID arrays.
kernel: md: autorun ...
kernel: md: considering hda1 ...
kernel: md: adding hda1 ...
kernel: md: adding hdc2 ...
kernel: md: created md4
kernel: md: bind<hdc2,1>
kernel: md: bind<hda1,2>
kernel: md: running: <hda1><hdc2>
kernel: md: hda1's event counter: 000000b9
kernel: md: hdc2's event counter: 000000b9
kernel: md: RAID level 1 does not need chunksize! Continuing anyway.
kernel: md4: max total readahead window set to 508k
kernel: md4: 1 data-disks, max readahead per data-disk: 508k
kernel: raid1: raid set md4 active with 2 out of 2 mirrors
kernel: md: updating md4 RAID superblock on device
kernel: md: hda1 [events: 000000ba]<6>(write) hda1's sb offset: 264960
kernel: md: hdc2 [events: 000000ba]<6>(write) hdc2's sb offset: 264960

fsck: /dev/md4: clean, 23398/66264 files, 92323/264960 blocks
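For what it's worth, after the next clean reboot I plan to re-check both halves of the mirror with the same commands as above, to see whether the superblocks and event counters stay in sync:

    # re-check md4 after the next reboot (same tools used above)
    cat /proc/mdstat              # kernel's runtime view of the array
    lsraid -a /dev/md4            # array as lsraid sees it
    lsraid -d /dev/hda1           # superblock read from hda1
    lsraid -d /dev/hdc2           # superblock read from hdc2 (currently "unbound")
    dmesg | grep 'event counter'  # event counters from the last assembly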