On Wed, Jan 24, 2024 at 5:18 PM Yu Kuai <yukuai3@xxxxxxxxxx> wrote:
>
> First regression related to stop sync thread:
>
> The lifetime of sync_thread is designed as following:
>
> 1) Decide to start sync_thread, set MD_RECOVERY_NEEDED, and wake up the
> daemon thread;
> 2) Daemon thread detects that MD_RECOVERY_NEEDED is set, then sets
> MD_RECOVERY_RUNNING and registers sync_thread;
> 3) Execute md_do_sync() for the actual work; if it's done or
> interrupted, it will set MD_RECOVERY_DONE and wake up the daemon thread;
> 4) Daemon thread detects that MD_RECOVERY_DONE is set, then clears
> MD_RECOVERY_RUNNING and unregisters sync_thread;
>
> In v6.7, we fixed md/raid to follow this design by commit f52f5c71f3d4
> ("md: fix stopping sync thread"); however, dm-raid was not considered at
> that time, and the following tests will hang:

Hi Kuai,

Thanks very much for the patch set. I reported the dm-raid deadlock when
stopping dm-raid, and we had the patch set "[PATCH v5 md-fixes 0/3] md:
fix stopping sync thread", which includes patch f52f5c71f3d4. So we did
consider dm-raid at that time, because we wanted to resolve the deadlock
problem.

I re-read patch f52f5c71f3d4. It has two major changes. One is to use a
common function, stop_sync_thread(), for stopping the sync thread; this
fixes the deadlock problem. The other changes the way the sync thread is
reaped: mdraid and dmraid reap the sync thread in __md_stop_writes(). So
the patch looks overweight.

Before f52f5c71f3d4, do_md_stop() released reconfig_mutex before waiting
for sync_thread to finish, so there should not be the deadlock problem
that was fixed in 130443d60b1b ("md: refactor idle/frozen_sync_thread()
to fix deadlock"). So we only need to change __md_stop_writes() to stop
the sync thread the way do_md_stop() does and reap the sync thread
directly. Maybe this can avoid the deadlock? I'll try this approach and
report the test results.
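As an aside, the four lifecycle steps quoted above can be modeled as a
small flag state machine. The sketch below is only an illustration of
the design, not the kernel's actual code: the flag values, function
names, and the absence of locking are all simplifications.

```c
#include <assert.h>

/* Illustrative stand-ins for the MD_RECOVERY_* flags; the real bit
 * numbers live in drivers/md/md.h and differ from these. */
enum {
    RECOVERY_NEEDED  = 1u << 0,
    RECOVERY_RUNNING = 1u << 1,
    RECOVERY_DONE    = 1u << 2,
};

/* Step 2: the daemon thread sees NEEDED, sets RUNNING and
 * "registers" the sync thread. */
static unsigned daemon_register(unsigned flags)
{
    if (flags & RECOVERY_NEEDED) {
        flags &= ~RECOVERY_NEEDED;
        flags |= RECOVERY_RUNNING;
    }
    return flags;
}

/* Step 3: md_do_sync() finishes (or is interrupted) and sets DONE. */
static unsigned sync_finish(unsigned flags)
{
    return flags | RECOVERY_DONE;
}

/* Step 4: the daemon thread sees RUNNING+DONE together and reaps the
 * sync thread, clearing both flags. A sync thread that never sets
 * DONE is therefore never reaped. */
static unsigned daemon_reap(unsigned flags)
{
    if ((flags & RECOVERY_RUNNING) && (flags & RECOVERY_DONE))
        flags &= ~(RECOVERY_RUNNING | RECOVERY_DONE);
    return flags;
}
```

The model makes the invariant visible: every path through steps 1-3
must eventually set DONE, or step 4 can never run.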
>
> shell/integrity-caching.sh
> shell/lvconvert-raid-reshape.sh
>
> This patch set fixes the broken tests with patches 1-4;
> - patch 1 fixes that step 4) is broken by a suspended array;
> - patch 2 fixes that step 4) is broken by a read-only array;
> - patch 3 fixes that step 3) is broken because md_do_sync() doesn't set
> MD_RECOVERY_DONE; note that this patch introduces a new problem that
> data will be corrupted, which is fixed in later patches.
> - patch 4 fixes that step 1) is broken because sync_thread is registered
> and MD_RECOVERY_RUNNING is set directly;
>
> With patches 1-4, the above tests won't hang anymore; however, the
> tests will still fail and complain that ext4 is corrupted;

For patch 3, as I mentioned today, the root cause is that dm-raid's
rs_start_reshape() sets MD_RECOVERY_WAIT, so md_do_sync() returns when
MD_RECOVERY_WAIT is set. That's why dm-raid can't stop the sync thread
when starting a new reshape. The way in patch 3 looks like a workaround.
We need to figure out whether dm-raid really needs to set
MD_RECOVERY_WAIT. Because we now stop the sync thread in an asynchronous
way, the deadlock problem that was fixed in 644e2537f ("dm raid: fix
stripe adding reshape deadlock") may disappear. Maybe we can revert that
patch.

Best Regards
Xiao

> Second regression related to frozen sync thread:
>
> Noted that for raid456, if reshape is interrupted, then calling
> "pers->start_reshape" will corrupt data. This is because dm-raid relies
> on md_do_sync() not setting MD_RECOVERY_DONE, so that a new sync_thread
> won't be registered, and patch 3 just breaks this.
>
> - Patches 5-6 fix this problem by interrupting reshape and freezing
> sync_thread in dm_suspend(), then unfreezing and continuing reshape in
> dm_resume(). It's verified that the dm-raid tests won't complain that
> ext4 is corrupted anymore.
> - Patch 7 fixes the problem that raid_message() calls
> md_reap_sync_thread() directly, without holding 'reconfig_mutex'.
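To illustrate the MD_RECOVERY_WAIT point discussed above: if
md_do_sync() returns early while that flag is set, it never reaches the
code that sets MD_RECOVERY_DONE, so the daemon thread never reaps the
sync thread. A minimal model of that early return — flag values and the
function name are illustrative, not the kernel's:

```c
#include <assert.h>

/* Illustrative flag values; the real MD_RECOVERY_WAIT and
 * MD_RECOVERY_DONE bits are defined in drivers/md/md.h. */
enum {
    RECOVERY_WAIT = 1u << 0,
    RECOVERY_DONE = 1u << 1,
};

/* Model of md_do_sync(): with WAIT set it bails out before the code
 * that sets DONE, so the lifecycle's step 4 (reaping the sync
 * thread) can never trigger. */
static unsigned do_sync_model(unsigned flags)
{
    if (flags & RECOVERY_WAIT)
        return flags;          /* early return: DONE is never set */
    return flags | RECOVERY_DONE;
}
```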
>
> Last regression related to dm-raid456 IO concurrent with reshape:
>
> For raid456, if reshape is still in progress, then IO across the
> reshape position will wait for reshape to make progress. However, for
> dm-raid, in the following cases reshape will never make progress, hence
> IO will hang:
>
> 1) the array is read-only;
> 2) MD_RECOVERY_WAIT is set;
> 3) MD_RECOVERY_FROZEN is set;
>
> After commit c467e97f079f ("md/raid6: use valid sector values to
> determine if an I/O should wait on the reshape") fixed the problem that
> IO across the reshape position doesn't wait for reshape, the dm-raid
> test shell/lvconvert-raid-reshape.sh started to hang at
> raid5_make_request().
>
> For md/raid, the problem doesn't exist because:
>
> 1) If the array is read-only, it can be switched to read-write by
> ioctl/sysfs;
> 2) md/raid never sets MD_RECOVERY_WAIT;
> 3) If MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold
> 'reconfig_mutex' anymore; it can be cleared and reshape can continue
> via the sysfs api 'sync_action'.
>
> However, I'm not sure yet how to avoid the problem in dm-raid.
>
> - patches 9-11 fix this problem by detecting the above 3 cases in
> dm_suspend(), and failing such IO directly.
>
> If users really hit the IO error, it means they were reading the wrong
> data before c467e97f079f. And it's safe to read/write the array after
> reshape makes progress successfully.
>
> Tests:
>
> I already ran the following two tests many times and verified that
> they won't fail anymore:
>
> shell/integrity-caching.sh
> shell/lvconvert-raid-reshape.sh
>
> For other tests, I'm still running them. However, I'm sending this
> patchset now in case people think the fixes are not appropriate.
> Running the full tests costs lots of time in my VM, and I'll update
> full test results soon.
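The three no-progress cases quoted above reduce to a single predicate,
which is what patches 9-11 effectively check during suspend before
failing the IO. A hedged sketch — the struct and helper names here are
made up for illustration and are not the actual dm-raid code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag values; the real bits are MD_RECOVERY_WAIT and
 * MD_RECOVERY_FROZEN in drivers/md/md.h. */
enum {
    RECOVERY_WAIT   = 1u << 0,
    RECOVERY_FROZEN = 1u << 1,
};

/* Hypothetical array state: just enough to express the three cases. */
struct array_state {
    bool read_only;
    unsigned recovery_flags;
};

/* True when reshape can never make progress, i.e. when IO crossing the
 * reshape position would wait forever and should be failed instead. */
static bool reshape_stuck(const struct array_state *s)
{
    return s->read_only ||
           (s->recovery_flags & (RECOVERY_WAIT | RECOVERY_FROZEN));
}
```

For md/raid each of these conditions is transient (the array can be made
read-write, WAIT is never set, FROZEN can be cleared via sysfs), which
is why the predicate only matters for dm-raid.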
>
> Yu Kuai (11):
>   md: don't ignore suspended array in md_check_recovery()
>   md: don't ignore read-only array in md_check_recovery()
>   md: make sure md_do_sync() will set MD_RECOVERY_DONE
>   md: don't register sync_thread for reshape directly
>   md: export helpers to stop sync_thread
>   dm-raid: really frozen sync_thread during suspend
>   md/dm-raid: don't call md_reap_sync_thread() directly
>   dm-raid: remove mddev_suspend/resume()
>   dm-raid: add a new helper prepare_suspend() in md_personality
>   md: export helper md_is_rdwr()
>   md/raid456: fix a deadlock for dm-raid456 while io concurrent with
>     reshape
>
>  drivers/md/dm-raid.c |  76 +++++++++++++++++++++----------
>  drivers/md/md.c      | 104 ++++++++++++++++++++++++++++---------------
>  drivers/md/md.h      |  16 +++++++
>  drivers/md/raid10.c  |  16 +------
>  drivers/md/raid5.c   |  61 +++++++++++++------------
>  5 files changed, 171 insertions(+), 102 deletions(-)
>
> --
> 2.39.2
>