Re: RAID6, failed device, unresponsive system?

[ ... ]

>> Why is the system unresponsive, shouldn't it still be OK
>> after a drive failure?

There is a bit of a difference between a "drive failure" and
some/several bad sectors on a drive.

It is also worth asking whether the partially defective drive
has been "failed" and "removed" from the MD set, and perhaps
"deleted" using '/sys/block/sdb/device/delete'.

> Hm, I'm seeing this in dmesg, could it be related? (ioctl lock)

> [425480.928740] md/raid:md0: read error corrected (8 sectors at
> 223617240 on sdb1)

Note the "read error corrected" (*corrected*), and that it is
"8 sectors": 8 x 512B is exactly one 4096B physical sector,
which may indicate it is one of the drives with 4096B sectors
that is configured as if it had 512B ones.
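Whether that is the case can be checked from the kernel's view
of the device, for example (assuming the drive is 'sdb'):

  # Logical sector size as presented to the host, often 512.
  cat /sys/block/sdb/queue/logical_block_size
  # Physical sector size on the medium, 4096 on "AF" drives.
  cat /sys/block/sdb/queue/physical_block_size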

[ ... ]

Overall it is likely that you have just implicitly discovered
how important short settings for Error Recovery Control (ERC)
are, and how important it is to choose drives that allow you
to set them:

  http://www.sabi.co.uk/blog/1103Mar.html#110331
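
On drives that support it, the ERC timeouts can be queried and
set with 'smartctl'; as a sketch (the 7.0s value is a commonly
suggested one, not something from this thread):

  # Query the current SCT ERC read/write timeouts.
  smartctl -l scterc /dev/sdb
  # Set both to 7.0 seconds (units are tenths of a second).
  smartctl -l scterc,70,70 /dev/sdb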

