Spurious bad blocks on RAID-5

I seem to be seeing a similar problem to the one reported at
https://marc.info/?l=linux-raid&m=151492849314453&w=2

I have a bunch of blocks on my RAID-5 which I can't read:

[42184.038703] Buffer I/O error on dev md127, logical block 6543126661, async page read
[42184.077157] Buffer I/O error on dev md127, logical block 6543126662, async page read
[42184.099128] Buffer I/O error on dev md127, logical block 6543126663, async page read
[42184.110579] Buffer I/O error on dev md127, logical block 6543126664, async page read
[42184.119073] Buffer I/O error on dev md127, logical block 6543126789, async page read
[42184.127457] Buffer I/O error on dev md127, logical block 6543126790, async page read
[42184.135790] Buffer I/O error on dev md127, logical block 6543126791, async page read
[42184.144212] Buffer I/O error on dev md127, logical block 6543126792, async page read
[42184.152812] Buffer I/O error on dev md127, logical block 6543126917, async page read
[42184.161392] Buffer I/O error on dev md127, logical block 6543126918, async page read
[42198.249606] Buffer I/O error on dev md127, logical block 6543126919, async page read
[42198.295172] Buffer I/O error on dev md127, logical block 6543126920, async page read
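
Those reads should be reproducible directly with dd (assuming the
'logical block' numbers above are the usual 4KiB blocks), e.g.:

# dd if=/dev/md127 of=/dev/null bs=4096 skip=6543126661 count=1 iflag=direct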

The file system is ext4, and its fsck can't *write* to those blocks
either:
Buffer I/O error on dev md127, logical block 6543126919, lost async page write

I see no actual I/O errors on the underlying drives, and S.M.A.R.T.
reports them all healthy. Yet MD thinks I have bad blocks on three of
them at exactly the same location:

Bad-blocks list is empty in /dev/sda3
Bad-blocks list is empty in /dev/sdb3
Bad-blocks on /dev/sdc3:
         13086517288 for 32 sectors
Bad-blocks on /dev/sdd3:
         13086517288 for 32 sectors
Bad-blocks on /dev/sde3:
         13086517288 for 32 sectors
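
If I'm reading the tools right, that per-member bad-block log can be
dumped either with mdadm or straight from sysfs:

# mdadm --examine-badblocks /dev/sdc3
# cat /sys/block/md127/md/dev-sdc3/bad_blocks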

That seems very unlikely to me. FWIW, those ranges are readable on the
underlying disks and contain all zeroes.
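
Something like this reads the listed range straight off one of the
components, assuming those offsets are relative to the start of the
partition:

# dd if=/dev/sdc3 bs=512 skip=13086517288 count=32 iflag=direct | hexdump -C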

Is the best option still to reassemble the array with
'--update=force-no-bbl'? Will that *clear* the BBL so that I can
subsequently assemble it with '--update=bbl' without losing those
sectors again?
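
I.e. something along these lines, if I understand the options correctly:

# mdadm --stop /dev/md127
# mdadm --assemble /dev/md127 --update=force-no-bbl /dev/sd[a-e]3

and then, once the list has been cleared:

# mdadm --stop /dev/md127
# mdadm --assemble /dev/md127 --update=bbl /dev/sd[a-e]3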


The pattern of offending blocks here looks remarkably similar to the one
in the previous report. Is there any clue as to how it happened?

It seemed to *start* with a 'lost async page write' message just like
the one above, with ext4 then remounting the file system read-only. I
rebooted, only to find that fsck couldn't write to those blocks.

# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Thu Jun 25 20:46:52 2020
        Raid Level : raid5
        Array Size : 31245086720 (29797.64 GiB 31994.97 GB)
     Used Dev Size : 7811271680 (7449.41 GiB 7998.74 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue May 25 00:34:40 2021
             State : active, checking 
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

      Check Status : 58% complete

              Name : desiato_root
              UUID : 29124898:0e6a5ad0:bd30e229:64129ed0
            Events : 758338

    Number   Major   Minor   RaidDevice State
       7       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       8       8       35        2      active sync   /dev/sdc3
       9       8       51        3      active sync   /dev/sdd3
       5       8       67        4      active sync   /dev/sde3

# uname -a
Linux desiato.infradead.org 5.11.20-200.fc33.x86_64 #1 SMP Wed May 12 12:48:34 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
