Perhaps replacing that drive would be a good idea, then. Also - don't
mdadm --replace it; better to --fail, --remove and --add it. Otherwise,
md might just replicate the bad block list unless you turn it off
first. ZFS handles this far better, that's true, but I chose to go back
to md since ZFS isn't really very flexible.
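Roughly, assuming the array is /dev/md0 and the member partition is
/dev/sdc1 (placeholders - use your own device names), that sequence
would look something like this:

  # mark the member faulty, take it out of the array, then add it
  # (or a replacement) back so md does a full resync
  mdadm /dev/md0 --fail /dev/sdc1
  mdadm /dev/md0 --remove /dev/sdc1
  mdadm /dev/md0 --add /dev/sdc1

The same sequence applies if you swap in a brand new disk - just --add
the new device instead of the old one.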