On 07/11/12 17:55, joystick wrote:
> we still need someone to test the other case, a more common scenario I'd
> say: the disk to be replaced fails during hot-replace
I suspect I can do that by creating some "media errors" using hdparm
while the replacement is in progress.
I'd have *thought* that the procedure should be to re-direct the read to
another mirror (it is a raid-10 after all).
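For the record, a sketch of how I'd inject those media errors: hdparm's `--make-bad-sector` flag (which refuses to run without `--yes-i-know-what-i-am-doing`) marks a sector uncorrectable so the drive returns a read error on it. The device and sector number below are placeholders; pick a sector inside the array's data area on the disk being replaced.

```shell
#!/bin/sh
# Sketch only: flag a sector as bad on a member disk while the
# hot-replace is running, then read it back so md hits the error path.

inject_media_error() {
    dev=$1
    sector=$2
    # Mark the sector uncorrectable; hdparm requires the
    # confirmation flag before it will do this.
    hdparm --make-bad-sector "$sector" --yes-i-know-what-i-am-doing "$dev"
    # Force a direct read of that sector so md sees the I/O error.
    dd if="$dev" of=/dev/null bs=512 skip="$sector" count=1 iflag=direct
}

# Example (placeholders): inject_media_error /dev/sdc 123456
```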
>> The test machine is on UPS, so I have not done any testing that
>> involves reboots during a re-sync.
> And also this one...
> Best simulation would be pulling the plug so that the disks do not flush
> if you have still time for us, of course :-)
I assume that to test that adequately (as in re-assembling an array with a
replacement in progress) I'd need to upgrade mdadm to the latest git code?
No biggie, it just requires a bit more tweaking of the initramfs stuff.
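In case it helps anyone following along, this is roughly what I'd do to get mdadm from git onto the box; the repository URL is the kernel.org tree, and the `update-initramfs` step is a Debian assumption (other distros rebuild their initramfs differently).

```shell
#!/bin/sh
# Sketch: build and install mdadm from the upstream git tree, then
# refresh the initramfs so the early-boot copy matches the new binary.

build_mdadm_from_git() {
    git clone git://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
    cd mdadm || return 1
    make
    make install          # installs to /sbin by default
    mdadm --version       # confirm the new build is the one on PATH
    update-initramfs -u   # Debian-specific: rebuild the initramfs
}
```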
The tests thus far have been conducted in spare drive slots in my
production server. This afternoon I picked up a new motherboard for my
test box, so tomorrow I'll have a dedicated machine to play with:
Debian on SSD, plus 10 x 1TB 7200 RPM drives that have been retired from
active hard service (they have about 22,000 hours on them) and are ready
to serve as a test mule.
If you put together a set of tests you'd like performed, I'll be happy to
run them and see what happens. The machine is on a managed APC PDU (yay
Gumtree!), so remote power cycling is a lot easier than it used to be,
and I really don't mind hammering the disks with emergency parks or
excessive power cycles.
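To make the request concrete, here's the kind of sequence I have in mind for the fail-during-replace case. The array name, device names, and sector number are all placeholders; the replace/with syntax is the hot-replace support from recent mdadm.

```shell
#!/bin/sh
# Sketch of one test iteration: start a hot-replace, fail the outgoing
# disk mid-copy, and check that the array recovers from another mirror.
# /dev/md0, /dev/sdc, /dev/sdk, and the sector number are placeholders.

test_fail_during_replace() {
    # Kick off the hot-replace of sdc with the spare sdk.
    mdadm /dev/md0 --replace /dev/sdc --with /dev/sdk
    # While the copy runs, inject a read error on the outgoing disk.
    hdparm --make-bad-sector 123456 --yes-i-know-what-i-am-doing /dev/sdc
    # Wait for the copy to finish; md should satisfy the failed read
    # from the other mirror in the raid10 set.
    while grep -q recovery /proc/mdstat; do sleep 5; done
    # Afterwards, scrub to confirm no mismatches were left behind.
    echo check > /sys/block/md0/md/sync_action
    cat /sys/block/md0/md/mismatch_cnt
}
```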
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html