[PATCH v5 00/14] dm-raid/md/raid: fix v6.7 regressions

From: Yu Kuai <yukuai3@xxxxxxxxxx>

Changes in v5:
 - remove the patch to wait for bio completion while removing dm disk;
 - add patch 6;
 - reorder the patches; patches 1-8 are for md/raid, and patches 9-14 are
 related to dm-raid;

Changes in v4:
 - add patch 10 to fix a raid456 deadlock (for both md/raid and dm-raid);
 - add patch 13 to wait for inflight IO completion while removing dm
 device;

Changes in v3:
 - fix a problem in patch 5;
 - add patch 12;

Changes in v2:
 - replace revert changes for dm-raid with real fixes;
 - fix a dm-raid5 deadlock that has existed for a long time; this
 deadlock is triggered because another problem is fixed in raid5. Before
 v6.7, instead of a deadlock, the user would read wrong data; patches 9-11;

First regression related to stop sync thread:

The lifetime of sync_thread is designed as follows:

1) Decide to start the sync_thread: set MD_RECOVERY_NEEDED and wake up
the daemon thread;
2) The daemon thread detects that MD_RECOVERY_NEEDED is set, then sets
MD_RECOVERY_RUNNING and registers the sync_thread;
3) Execute md_do_sync() for the actual work; when it is done or
interrupted, it sets MD_RECOVERY_DONE and wakes up the daemon thread;
4) The daemon thread detects that MD_RECOVERY_DONE is set, then clears
MD_RECOVERY_RUNNING and unregisters the sync_thread;

In v6.7, we fixed md/raid to follow this design in commit f52f5c71f3d4
("md: fix stopping sync thread"); however, dm-raid was not considered at
that time, and the following tests hang:

shell/integrity-caching.sh
shell/lvconvert-raid-reshape.sh

Patches 1-4 of this set fix the broken tests:
 - patch 1 fixes step 4) being broken by a suspended array;
 - patch 2 fixes step 4) being broken by a read-only array;
 - patch 3 fixes step 3) being broken when md_do_sync() doesn't set
 MD_RECOVERY_DONE; note that this patch introduces a new problem that
 data will be corrupted, which is fixed in later patches.
 - patch 4 fixes step 1) being broken when the sync_thread is registered
 and MD_RECOVERY_RUNNING is set directly; this is md/raid behaviour, not
 related to dm-raid;

With patches 1-4, the above tests no longer hang; however, they still
fail and complain that ext4 is corrupted.


The second regression was found by code review: an interrupted reshape
concurrent with IO can deadlock; fixed by patch 5.


The third regression is an 'active_io' leakage; fixed by patch 6.


The fourth regression is related to the frozen sync thread:

Note that for raid456, if a reshape is interrupted, calling
"pers->start_reshape" will corrupt data. dm-raid relied on md_do_sync()
not setting MD_RECOVERY_DONE so that a new sync_thread would not be
registered, and patch 3 breaks this assumption.

 - Patch 9 fixes this problem by interrupting the reshape and freezing
 the sync_thread in dm_suspend(), then unfreezing and continuing the
 reshape in dm_resume(). It is verified that the dm-raid tests no longer
 complain that ext4 is corrupted.
 - Patch 10 fixes the problem that raid_message() calls
 md_reap_sync_thread() directly, without holding 'reconfig_mutex'.


The last regression is related to dm-raid456 IO concurrent with reshape:

For raid456, if a reshape is still in progress, IO across the reshape
position will wait for the reshape to make progress. However, for
dm-raid, in the following cases the reshape will never make progress,
hence the IO will hang:

1) the array is read-only;
2) MD_RECOVERY_WAIT is set;
3) MD_RECOVERY_FROZEN is set;

After commit c467e97f079f ("md/raid6: use valid sector values to determine
if an I/O should wait on the reshape") fixed the problem that IO across the
reshape position doesn't wait for reshape, the dm-raid test
shell/lvconvert-raid-reshape.sh started to hang in raid5_make_request().

For md/raid, the problem doesn't exist because:

1) If the array is read-only, it can be switched to read-write via
   ioctl/sysfs;
2) md/raid never sets MD_RECOVERY_WAIT;
3) If MD_RECOVERY_FROZEN is set, mddev_suspend() no longer holds
   'reconfig_mutex', so the flag can be cleared and the reshape can
   continue via the sysfs attribute 'sync_action'.
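For plain md/raid, case 3) can thus be resolved from user space through
the 'sync_action' sysfs attribute. A minimal sketch, assuming an md
array at /dev/md0 and root privileges:

```shell
# Inspect the current sync action; this reports "frozen" while
# MD_RECOVERY_FROZEN is set.
cat /sys/block/md0/md/sync_action

# Writing "idle" clears MD_RECOVERY_FROZEN so the reshape can continue.
echo idle > /sys/block/md0/md/sync_action
```

dm-raid has no equivalent escape hatch, which is why the cases have to
be handled inside the suspend path instead.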

However, I'm not sure yet how to avoid the problem in dm-raid.

 - patches 11 and 12 fix this problem by detecting the above 3 cases in
 dm_suspend() and failing such IO directly.

If the user really hits this IO error, it means they were reading wrong
data before c467e97f079f. It is safe to read/write the array after the
reshape makes progress successfully.

There are also some other minor changes: patches 8 and 12.


Test result (from v4; I don't think it's necessary to retest this
patchset for v5: apart from the new fix, patch 6, which was tested
separately, there are no other functional changes):

I applied this patchset on top of v6.8-rc1 and ran the lvm2 test suite
with the following command for 24 rounds (about 2 days):

for t in $(ls test/shell); do
        if grep -q raid "test/shell/$t"; then
                make check T="shell/$t"
        fi
done

failed count                             failed test
      1 ###       failed: [ndev-vanilla] shell/dmsecuretest.sh
      1 ###       failed: [ndev-vanilla] shell/dmsetup-integrity-keys.sh
      1 ###       failed: [ndev-vanilla] shell/dmsetup-keyring.sh
      5 ###       failed: [ndev-vanilla] shell/duplicate-pvs-md0.sh
      1 ###       failed: [ndev-vanilla] shell/duplicate-vgid.sh
      2 ###       failed: [ndev-vanilla] shell/duplicate-vgnames.sh
      1 ###       failed: [ndev-vanilla] shell/fsadm-crypt.sh
      1 ###       failed: [ndev-vanilla] shell/integrity.sh
      6 ###       failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
      2 ###       failed: [ndev-vanilla] shell/lvchange-rebuild-raid.sh
      5 ###       failed: [ndev-vanilla] shell/lvconvert-raid-reshape-stripes-load-reload.sh
      4 ###       failed: [ndev-vanilla] shell/lvconvert-raid-restripe-linear.sh
      1 ###       failed: [ndev-vanilla] shell/lvconvert-raid1-split-trackchanges.sh
     20 ###       failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
     20 ###       failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
     24 ###       failed: [ndev-vanilla] shell/lvextend-raid.sh

I randomly picked some of these tests (not all) and verified by hand
that they fail in v6.6 as well:

shell/lvextend-raid.sh
shell/lvcreate-large-raid.sh
shell/lvconvert-repair-raid.sh
shell/lvchange-rebuild-raid.sh
shell/lvchange-raid1-writemostly.sh

Xiao Ni also tested the previous version on a real machine; see [1].

[1] https://lore.kernel.org/all/CALTww29QO5kzmN6Vd+jT=-8W5F52tJjHKSgrfUc1Z1ZAeRKHHA@xxxxxxxxxxxxxx/

Yu Kuai (14):
  md: don't ignore suspended array in md_check_recovery()
  md: don't ignore read-only array in md_check_recovery()
  md: make sure md_do_sync() will set MD_RECOVERY_DONE
  md: don't register sync_thread for reshape directly
  md: don't suspend the array for interrupted reshape
  md: fix missing release of 'active_io' for flush
  md: export helpers to stop sync_thread
  md: export helper md_is_rdwr()
  dm-raid: really frozen sync_thread during suspend
  md/dm-raid: don't call md_reap_sync_thread() directly
  dm-raid: add a new helper prepare_suspend() in md_personality
  md/raid456: fix a deadlock for dm-raid456 while io concurrent with
    reshape
  dm-raid: fix lockdep waring in "pers->hot_add_disk"
  dm-raid: remove mddev_suspend/resume()

 drivers/md/dm-raid.c |  78 +++++++++++++++++++--------
 drivers/md/md.c      | 126 +++++++++++++++++++++++++++++--------------
 drivers/md/md.h      |  16 ++++++
 drivers/md/raid10.c  |  16 +-----
 drivers/md/raid5.c   |  61 +++++++++++----------
 5 files changed, 192 insertions(+), 105 deletions(-)

-- 
2.39.2




