[PATCH 19/19] clustermd_tests: add test case for switch-recovery against cluster-raid10

03r10_switch-recovery:
Create a new array with 2 active disks and 1 spare, then mark one active disk
as failed. This triggers recovery, and the spare disk replaces the failed one.
Stop the array on the node that is performing the recovery; the other node
takes it over and completes the recovery.

Signed-off-by: Zhilong Liu <zlliu@xxxxxxxx>
---
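For reviewers, the scenario boils down to roughly the manual sequence below
(a sketch only, not part of the test; $NODE2, $dev0..$dev2 and $md0 are the
placeholder variables supplied by the clustermd test environment):

  # Node 1: create a clustered RAID10 with 2 active disks and 1 spare.
  mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 -x1 $dev0 $dev1 $dev2
  # Node 2: assemble the same array so both nodes hold it active.
  ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
  # Node 1: fail one active disk; the spare takes over and recovery starts here.
  mdadm --manage $md0 --fail $dev0
  grep -A2 "$(basename $md0)" /proc/mdstat    # should show a "recovery" line
  # Node 1: stop the array while recovery is still in progress.
  mdadm -S $md0
  # Node 2: the surviving node takes the recovery over and finishes it.
  ssh $NODE2 cat /proc/mdstat
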
 clustermd_tests/03r10_switch-recovery | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)
 create mode 100644 clustermd_tests/03r10_switch-recovery

diff --git a/clustermd_tests/03r10_switch-recovery b/clustermd_tests/03r10_switch-recovery
new file mode 100644
index 0000000..867388d
--- /dev/null
+++ b/clustermd_tests/03r10_switch-recovery
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 -x1 $dev0 $dev1 $dev2 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1 $dev2
+check all nosync
+check all raid10
+check all bitmap
+check all spares 1
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0
+sleep 0.2
+check $NODE1 recovery
+stop_md $NODE1 $md0
+check $NODE2 recovery
+check $NODE2 wait
+check $NODE2 state UU
+check all dmesg
+stop_md $NODE2 $md0
+
+exit 0
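
One note on the 'check $NODE2 wait' step above: it relies on the shared test
helpers. Outside the harness, the equivalent wait can be approximated with a
small polling loop (a rough sketch, not the helper's actual implementation):

  # Poll the surviving node until no recovery/resync activity is reported.
  while ssh $NODE2 "grep -qE 'recovery|resync' /proc/mdstat"; do
          sleep 1
  done
  ssh $NODE2 cat /proc/mdstat    # the array should end up clean, i.e. [UU]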
-- 
2.6.6
