Re: Spurious bad blocks on RAID-5

On 25/05/2021 01:41, David Woodhouse wrote:
I see no actual I/O errors on the underlying drives, and S.M.A.R.T
reports them healthy. Yet MD thinks I have bad blocks on three of them
at exactly the same location:

Bad-blocks list is empty in /dev/sda3
Bad-blocks list is empty in /dev/sdb3
Bad-blocks on /dev/sdc3:
          13086517288 for 32 sectors
Bad-blocks on /dev/sdd3:
          13086517288 for 32 sectors
Bad-blocks on /dev/sde3:
          13086517288 for 32 sectors

That seems very unlikely to me. FWIW those ranges are readable on the
underlying disks, and contain all zeroes.
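
(For anyone following along, a spot check like that would be something along these lines -- illustrative only, using the offset from the listing above; if the array has a non-zero data offset, mdadm --examine reports it, and it would presumably need adding to the skip, since the BBL entries appear to be recorded relative to the member's data area:)

    # read the 32 flagged sectors straight off one member and dump them
    dd if=/dev/sdc3 bs=512 skip=13086517288 count=32 status=none | hexdump -C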

Is the best option still to reassemble the array with
'--update=force-no-bbl'? Will that *clear* the BBL so that I can
subsequently assemble it with '--update=bbl' without losing those
sectors again?

A lot of people swear AT the md bad-blocks list. If you assemble with '--update=force-no-bbl', the recommendation is NOT to re-enable it afterwards.
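
Roughly, the sequence is the sketch below (device and array names are assumed from your listing, not taken from your actual config):

    # the array has to be stopped and re-assembled for --update to apply
    mdadm --stop /dev/md0

    # force-no-bbl drops the bad-block log even when it still has entries
    mdadm --assemble /dev/md0 --update=force-no-bbl /dev/sd[a-e]3

    # verify the logs are gone on the members that had entries
    mdadm --examine-badblocks /dev/sdc3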

Personally, I'd recommend using dm-integrity rather than badblocks, but that (a) chews up some disk space, and (b) is not very well tested with mdraid at the moment.
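
If you did want to try that route, the rough shape is below. Note this is a from-scratch sketch with assumed device names; integritysetup format is destructive, so it is not an in-place conversion of an existing array:

    # give each member a standalone dm-integrity layer (this WIPES the device)
    integritysetup format /dev/sdc3
    integritysetup open /dev/sdc3 int-sdc3

    # ... repeat for the other members, then build the raid on the integrity devices
    mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/mapper/int-sd[a-e]3

The idea being that a checksum mismatch then surfaces as a read error, which the raid layer can repair from the remaining members.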

For example, md-raid should NEVER have any permanent bad blocks, because it's a logical layer: when the drive remaps a failing sector, the physical layer fixes the problem behind raid's back. But md seems to accumulate bad-block entries and never clear them ...

Cheers,
Wol


