[PATCH 15/19] clustermd_tests: add test case to test manage_re-add against cluster-raid10

02r10_Manage_re-add:
With 2 active disks in the array, mark 1 disk as 'fail' and 'remove' it
from the array, then re-add the disk to the array and trigger recovery.

Signed-off-by: Zhilong Liu <zlliu@xxxxxxxx>
---
 clustermd_tests/02r10_Manage_re-add | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
 create mode 100644 clustermd_tests/02r10_Manage_re-add

diff --git a/clustermd_tests/02r10_Manage_re-add b/clustermd_tests/02r10_Manage_re-add
new file mode 100644
index 0000000..2288a00
--- /dev/null
+++ b/clustermd_tests/02r10_Manage_re-add
@@ -0,0 +1,18 @@
+#!/bin/bash
+
+mdadm -CR $md0 -l10 -b clustered --layout n2 -n2 $dev0 $dev1 --assume-clean
+ssh $NODE2 mdadm -A $md0 $dev0 $dev1
+check all nosync
+check all raid10
+check all bitmap
+check all state UU
+check all dmesg
+mdadm --manage $md0 --fail $dev0 --remove $dev0
+mdadm --manage $md0 --re-add $dev0
+check $NODE1 recovery
+check all wait
+check all state UU
+check all dmesg
+stop_md all $md0
+
+exit 0
-- 
2.6.6



