Re: Logging-Loop when a drive in a raid1 fails.

Michael Renner wrote:
> Paul Clements wrote:
>
>> I don't think you should be. md in 2.6 (as of 2.6.9 or so) is as stable as 2.4, at least according to our stress tests.
>
> Including semi-dead/dying drives? As I said, normal operation is rock solid; it's the rarely exercised edge cases that tend (or tended) to break.

Well, we stress test with nbd under raid1. nbd has the nice property that it gives I/O errors (read or write, depending on what's going on at the time) when its network connection is broken. So, in our tests we break and reconnect the nbd connection periodically, while doing heavy I/O and, of course, the resync activity of raid1 kicks in on top of that when the failed device is re-added to the array.


Both 2.4 and 2.6 md are rock solid under several days of this type of testing.
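For illustration only, here is a minimal sketch of a comparable fail/re-add stress loop; it is not the harness described above. It assumes mdadm, root, an already-assembled raid1 array at /dev/md0 with /dev/nbd0 as one mirror leg (both names are placeholders, as are the timings), and heavy I/O against the array running separately.

#!/usr/bin/env python3
# Minimal sketch of a raid1 fail/re-add stress loop; not the harness
# described above. /dev/md0 and /dev/nbd0 are placeholder names, root
# and mdadm are required, and heavy I/O against the array is assumed
# to be running separately.
import subprocess
import time

MD_DEV = "/dev/md0"   # placeholder: the raid1 array under test
LEG = "/dev/nbd0"     # placeholder: the mirror leg to break repeatedly

def run(*args):
    # Echo and execute a command, raising if it fails.
    print("+", " ".join(args))
    subprocess.run(args, check=True)

def wait_for_resync():
    # Poll /proc/mdstat until no resync/recovery is in progress.
    while True:
        with open("/proc/mdstat") as f:
            stat = f.read()
        if "resync" not in stat and "recovery" not in stat:
            return
        time.sleep(5)

while True:
    run("mdadm", MD_DEV, "--fail", LEG)     # simulate the device failing
    run("mdadm", MD_DEV, "--remove", LEG)   # drop it from the array
    time.sleep(30)                          # run degraded for a while
    run("mdadm", MD_DEV, "--re-add", LEG)   # re-add; md resyncs on top of the I/O
    wait_for_resync()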

--
Paul
