On 7/18/22 8:20 PM, Nix wrote:
So I have a pair of RAID-6 mdraid arrays on this machine (one of which has bcache layered on top of it, with an LVM VG stretched across both). Kernel 5.16 + mdadm 4.0 (I know, it's old) works fine, but I just rebooted into 5.18.12 and it failed to assemble. mdadm didn't display anything useful: an mdadm --assemble --scan --auto=md --freeze-reshape simply didn't find anything to assemble, and after that nothing else was going to work. But rebooting into 5.16 worked fine, so everything was (thank goodness) actually still there.

Alas, I can't say what the state of the blockdevs was, other than that they all seemed to be in /dev, and I'm using DEVICE partitions so they should all have been spotted.
If the array was built on top of partitions, then my wild guess is that the problem is caused by the change in the block layer (1ebe2e5f9d68?); maybe md needs something similar to what the loop driver got in b9684a71.

diff --git a/drivers/md/md.c b/drivers/md/md.c
index c7ecb0bffda0..e5f2e55cb86a 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5700,6 +5700,7 @@ static int md_alloc(dev_t dev, char *name)
 	mddev->queue = disk->queue;
 	blk_set_stacking_limits(&mddev->queue->limits);
 	blk_queue_write_cache(mddev->queue, true, true);
+	set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
 	mddev->gendisk = disk;
 	error = add_disk(disk);

Thanks,
Guoqing
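
For context, a minimal sketch of the loop-driver pattern referenced above. This is reconstructed from the general shape of b9684a71 rather than quoted from drivers/block/loop.c, so the helper names are illustrative and the exact placement in the driver may differ:

#include <linux/blkdev.h>
#include <linux/bitops.h>

/* At gendisk allocation time: keep partition support (no GENHD_FL_NO_PART),
 * but suppress the kernel-initiated partition scan on first open.
 * (Illustrative helper, not the actual loop.c function.) */
static void example_loop_add(struct gendisk *disk)
{
	set_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
}

/* When userspace configures the device with partition scanning requested
 * (LO_FLAGS_PARTSCAN in the loop case): stop suppressing scans and ask for
 * one, so partitions appear as before.
 * (Illustrative helper; details may differ from the actual commit.) */
static void example_loop_configure(struct gendisk *disk, bool partscan)
{
	if (partscan) {
		clear_bit(GD_SUPPRESS_PART_SCAN, &disk->state);
		set_bit(GD_NEED_PART_SCAN, &disk->state);
	}
}

The md hunk above applies only the first half of this pattern: it marks the md gendisk before add_disk() so the kernel does not try to partition-scan the array device itself.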