md looping on recovery of raid1 array

md loops forever when attempting to recover a two-disk RAID1 array:

Jul  7 16:32:35 soho user.info kernel: md: recovery of RAID array md0
Jul  7 16:32:35 soho user.info kernel: md: minimum _guaranteed_  speed:
1000 KB/sec/disk.
Jul  7 16:32:35 soho user.info kernel: md: using maximum available idle
IO bandwidth (but not more than 200000 KB/sec) for recovery.
Jul  7 16:32:35 soho user.info kernel: md: using 128k window, over a
total of 2096384 blocks.
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.alert kernel: raid1: dm-1: unrecoverable I/O
read error for block 0
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.alert kernel: raid1: dm-1: unrecoverable I/O
read error for block 128
Jul  7 16:32:35 soho user.info kernel: md: md0: recovery done.
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.alert kernel: raid1: dm-1: unrecoverable I/O
read error for block 256
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.alert kernel: raid1: dm-1: unrecoverable I/O
read error for block 384
Jul  7 16:32:35 soho user.err kernel: scsi 1:0:0:0: rejecting I/O to
dead device
Jul  7 16:32:35 soho user.warn kernel: md: super_written gets error=-5,
uptodate=0
Jul  7 16:32:35 soho user.warn kernel: RAID1 conf printout:
Jul  7 16:32:35 soho user.warn kernel:  --- wd:1 rd:2
Jul  7 16:32:35 soho user.warn kernel:  disk 0, wo:1, o:1, dev:dm-7
Jul  7 16:32:35 soho user.warn kernel:  disk 1, wo:0, o:1, dev:dm-1
Jul  7 16:32:35 soho user.warn kernel: RAID1 conf printout:
Jul  7 16:32:35 soho user.warn kernel:  --- wd:1 rd:2
Jul  7 16:32:35 soho user.warn kernel:  disk 1, wo:0, o:1, dev:dm-1
Jul  7 16:32:35 soho user.warn kernel: RAID1 conf printout:
Jul  7 16:32:35 soho user.warn kernel:  --- wd:1 rd:2
Jul  7 16:32:35 soho user.warn kernel:  disk 0, wo:1, o:1, dev:dm-7
Jul  7 16:32:35 soho user.warn kernel:  disk 1, wo:0, o:1, dev:dm-1
Jul  7 16:32:35 soho user.info kernel: md: recovery of RAID array md0
...

This occurs after hot-removing both drives of the RAID1 array and
reinserting them.  The kernel version is 2.6.19.  Is anyone familiar
with this scenario?  Can anyone shed any light on what's happening here?
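In case it helps, here is a hedged sketch of the kind of sequence I would
expect to clear the stuck state, assuming the drives are actually healthy
after reinsertion.  The member names below are the dm devices from the log;
everything else is an assumption, not something I have verified:

```shell
# Command sketch only -- requires root and the actual array; not verified.

cat /proc/mdstat                 # recovery restarts in a loop
mdadm --detail /dev/md0          # shows which member is faulty/spare

# Stop the looping recovery, then re-assemble with both members
# (dm-1 and dm-7 are the members shown in the conf printout above):
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 /dev/dm-1 /dev/dm-7
```

The puzzle is that md reports "recovery done" and then immediately starts
recovery again, apparently because the superblock write itself fails
(super_written gets error=-5), so the array state never gets recorded.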

Thanks.
- Michael
