Re: MD RAID 1 fail/remove/add corruption in 3.10

Attached are the test scripts:

  mdcreate - creates three MD RAID1 pairs with internal bitmaps, then
             puts an {ext4,xfs,btrfs} filesystem on each

  break_md - loops over the two component disks of each pair, failing
             and then re-adding them.  Calls the mdtest script between
             test runs.

  mdtest - stops the fio tests on the MD devices, unmounts them,
           issues a RAID "check", and runs fsck on each MD device.
           If any fsck fails, return failure.  If all fscks are
           clean, remount and restart the fio tests.
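For anyone who wants the shape of the loop without opening the
attachments, here is a minimal sketch of the fail/remove/re-add cycle.
The device names and iteration count are examples, not the values used
in the attached scripts:

```shell
#!/bin/sh
# Sketch of the break_md loop: repeatedly fail, remove, and re-add each
# component disk of a RAID1 pair, running the fsck gate in between.
# /dev/md0, /dev/sdb1, /dev/sdc1 are example names.
MD=/dev/md0
DISKS="/dev/sdb1 /dev/sdc1"

for i in $(seq 1 20); do
    for d in $DISKS; do
        mdadm $MD --fail $d --remove $d
        sleep 5                        # let I/O run degraded for a bit
        mdadm $MD --re-add $d          # internal bitmap => fast resync
        # wait for the resync/recovery to finish before the next break
        while grep -q -e recovery -e resync /proc/mdstat; do
            sleep 5
        done
        ./mdtest.sh || exit 1          # stop as soon as fsck complains
    done
done
```

This has to run as root against disposable disks, of course.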

Usually I would see a non-zero RAID mismatch_cnt after the first or
second disk break.  Then, within a few iterations, one of the fsck
programs (usually xfs or btrfs) would complain.
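For anyone reproducing this: the mismatch count comes from the standard
MD sysfs interface after a scrub.  A sketch (md0 is an example device
name):

```shell
# Trigger a consistency check on md0 and read the resulting mismatch
# count; a non-zero value means the mirror halves differ.
echo check > /sys/block/md0/md/sync_action
while [ "$(cat /sys/block/md0/md/sync_action)" != "idle" ]; do
    sleep 5
done
cat /sys/block/md0/md/mismatch_cnt
```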

These scripts were cobbled together in the last day or two, so standard
disclaimers apply :)

Regards,

-- Joe

Attachment: break_md.sh
Description: application/shellscript

Attachment: mdcreate.sh
Description: application/shellscript

Attachment: mdtest.sh
Description: application/shellscript

