An update on my findings so far.

On Tue, Jan 30, 2024 at 8:27 AM Blazej Kucman
<blazej.kucman@xxxxxxxxxxxxxxx> wrote:
[...]
> Our daily tests directed at mdadm/md also detected a problem with
> identical symptoms as described in the thread.
>
> The issue was detected with IMSM metadata, but it also reproduces with
> native metadata. NVMe disks under a VMD controller were used.
>
> Scenario:
> 1. Create raid10:
> mdadm --create /dev/md/r10d4s128-15_A --level=10 --chunk=128
> --raid-devices=4 /dev/nvme6n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme0n1
> --size=7864320 --run
> 2. Create FS:
> mkfs.ext4 /dev/md/r10d4s128-15_A
> 3. Set one raid member faulty:
> mdadm --set-faulty /dev/md/r10d4s128-15_A /dev/nvme3n1

With a failed drive, md_thread calls md_check_recovery(), which sets
MD_RECOVERY_RUNNING and kicks off mddev->sync_work, i.e. md_start_sync().
md_start_sync() then calls mddev_suspend() and waits for mddev->active_io
to drop to zero.

> 4. Stop raid devices:
> mdadm -Ss

This command calls stop_sync_thread() and waits for MD_RECOVERY_RUNNING
to be cleared.

Given that we need a working file system to reproduce the issue, I
suspect the problem comes from active_io.

Yu Kuai, I guess we missed this case in the recent refactoring. I don't
have a good idea for how to fix this yet. Please also take a look into
this.

Thanks,
Song
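
P.S. To make the two crossed waits easier to see, here is a minimal
userspace sketch (plain pthreads, not kernel code). The flag and counter
names only mirror MD_RECOVERY_RUNNING and mddev->active_io, and the model
simply assumes the pending I/O never completes once suspend has started,
which is the suspicion above rather than a confirmed root cause.

/* Build with: gcc -pthread deadlock_model.c */
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static bool recovery_running;   /* stands in for MD_RECOVERY_RUNNING */
static int  active_io = 1;      /* stands in for mddev->active_io; one I/O
                                   is pending and, by assumption, never
                                   completes once suspend has started */

/* stands in for md_start_sync() -> mddev_suspend() */
static void *sync_worker(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        recovery_running = true;         /* set by md_check_recovery() */
        while (active_io > 0)            /* never drains in this model */
                pthread_cond_wait(&cond, &lock);
        recovery_running = false;        /* never reached */
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        return NULL;
}

/* stands in for "mdadm -Ss" -> stop_sync_thread() */
static void *stop_worker(void *arg)
{
        (void)arg;
        pthread_mutex_lock(&lock);
        while (recovery_running)         /* never cleared in this model */
                pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
        return NULL;
}

int main(void)
{
        pthread_t sync_thread, stop_thread;

        pthread_create(&sync_thread, NULL, sync_worker, NULL);
        sleep(1);                        /* mdadm -Ss arrives after the
                                            drive failure, so the recovery
                                            flag is already set */
        pthread_create(&stop_thread, NULL, stop_worker, NULL);
        pthread_join(stop_thread, NULL); /* hangs: neither wait can finish */
        return 0;
}

In this model, as in the scenario above, the sync worker will not clear
the recovery flag until active_io drains, and the stop path will not
proceed until the flag is cleared, so both sides block.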