Re: Spurious bad blocks on RAID-5

On 25/05/2021 18:49, antlists wrote:
On 25/05/2021 01:41, David Woodhouse wrote:
I see no actual I/O errors on the underlying drives, and S.M.A.R.T.
reports them healthy. Yet MD thinks I have bad blocks on three of them
at exactly the same location:

Bad-blocks list is empty in /dev/sda3
Bad-blocks list is empty in /dev/sdb3
Bad-blocks on /dev/sdc3:
          13086517288 for 32 sectors
Bad-blocks on /dev/sdd3:
          13086517288 for 32 sectors
Bad-blocks on /dev/sde3:
          13086517288 for 32 sectors

That seems very unlikely to me. FWIW those ranges are readable on the
underlying disks, and contain all zeroes.

Is the best option still to reassemble the array with
'--update=force-no-bbl'? Will that *clear* the BBL so that I can
subsequently assemble it with '--update=bbl' without losing those
sectors again?

A lot of people swear AT md badblocks. If you assemble with force-no-bbl, the recommendation will be to NOT re-enable it.
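
Roughly, the sequence would be something like this (an untested sketch; /dev/md0 is just a stand-in for whatever your array is actually called, the members are the partitions from your listing above):

  mdadm --stop /dev/md0
  # force-no-bbl drops the bad-blocks list even though it is non-empty
  mdadm --assemble /dev/md0 --update=force-no-bbl /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3
  # verify the lists are gone
  mdadm --examine-badblocks /dev/sdc3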

Personally, I'd recommend using dm-integrity rather than badblocks, but that (a) chews up some disk space, and (b) is not very well tested with mdraid at the moment.
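
If you do go down that road, the rough shape is below (sketch only; the mapper names like integ-sdc3 are arbitrary, and "integritysetup format" destroys the existing contents, so this is for rebuilding the array from backup, not converting in place):

  integritysetup format /dev/sdc3
  integritysetup open /dev/sdc3 integ-sdc3
  # repeat for every member, then create the array on the dm-integrity devices
  mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/mapper/integ-sd[a-e]3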

For example, md-raid should NEVER have any permanent bad blocks, because it's a logical layer, and the physical layer will remap them behind raid's back. But md seems to accumulate bad blocks and never clear them ...

And why is that crap implemented in such a way that replacing a disk with bad blocks doesn't reset the bad-blocks list for the given device?

That's perverse.


