Re: [PATCH 3/3] raid5: introduce MD_BROKEN

On Fri, 25 Feb 2022 15:22:00 +0800
Guoqing Jiang <guoqing.jiang@xxxxxxxxx> wrote:

> >> If one member disk was set Faulty, which caused MD_BROKEN to be set,
> >> is it possible to re-add the same member disk again?
> >>  
> > Is it possible to re-add a drive to a failed raid5 array now? From my
> > understanding of raid5_add_disk it is not possible.
> 
> I mean the steps below; as you can see, it works.
> 
> >> [root@vm ~]# echo faulty > /sys/block/md0/md/dev-loop1/state
> >> [root@vm ~]# cat /proc/mdstat
> >> Personalities : [raid6] [raid5] [raid4]
> >> md0 : active raid5 loop2[2] loop1[0](F)
> >>         1046528 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [_U]
> >>         bitmap: 0/1 pages [0KB], 65536KB chunk
> >>
> >> unused devices: <none>
> >> [root@vm ~]# echo re-add > /sys/block/md0/md/dev-loop1/state
> >> [root@vm ~]# cat /proc/mdstat
> >> Personalities : [raid6] [raid5] [raid4]
> >> md0 : active raid5 loop2[2] loop1[0]
> >>         1046528 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/2] [UU]
> >>         bitmap: 0/1 pages [0KB], 65536KB chunk
> >>
> >> unused devices: <none>


In this case the array is not failed (it is only degraded). For that
reason I think my changes are not related here.
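
What blocks a re-add is the failed check in raid5_add_disk(). Below is a
minimal standalone sketch of that distinction; the names has_failed,
degraded and max_degraded follow drivers/md/raid5.c, but the struct and
program around them are my simplification, not kernel code:

#include <stdio.h>

/*
 * Simplified model of has_failed() from drivers/md/raid5.c: an
 * array is failed only when more members are lost than the level
 * tolerates (max_degraded is 1 for raid5, 2 for raid6).
 * raid5_add_disk() rejects a fresh add with -EINVAL once
 * has_failed() is true; a merely degraded array stays eligible.
 */
struct r5_model {		/* my simplification of struct r5conf */
	int degraded;		/* failed/missing members */
	int max_degraded;	/* parity tolerance of the level */
};

static int has_failed(const struct r5_model *conf)
{
	return conf->degraded > conf->max_degraded;
}

int main(void)
{
	struct r5_model md127 = { .degraded = 1, .max_degraded = 1 };

	/* two-disk raid5 with one faulty member: degraded, not failed */
	printf("has_failed: %d\n", has_failed(&md127)); /* 0: re-add allowed */

	md127.degraded = 2;	/* second member lost: actually failed */
	printf("has_failed: %d\n", has_failed(&md127)); /* 1: add rejected */
	return 0;
}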

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] [raid10]
md127 : active raid5 nvme5n1[1] nvme4n1[0](F)
      5242880 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [_U]

unused devices: <none>
# cat /sys/block/md127/md/array_state
clean

# mdadm -D /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Thu Mar  3 18:49:53 2022
        Raid Level : raid5
        Array Size : 5242880 (5.00 GiB 5.37 GB)
     Used Dev Size : 5242880 (5.00 GiB 5.37 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Mar  3 18:52:46 2022
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : gklab-localhost:vol  (local to host gklab-localhost)
              UUID : 711594e8:73ef988c:87a85085:b30c838d
            Events : 8

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1     259        9        1      active sync   /dev/nvme5n1

       0     259        5        -      faulty   /dev/nvme4n1
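
Note also that array_state above still reads "clean" even with the
faulty member. My expectation with MD_BROKEN is that a genuinely failed
array reports "broken" instead; a minimal sketch of that reporting rule
(modeled on array_state_show() in drivers/md/md.c; the helper below is
my own, not the kernel function):

#include <stdio.h>
#include <stdbool.h>

/*
 * Simplified model of the array_state sysfs reporting: a clean
 * array with MD_BROKEN set should be shown as "broken".  The
 * real logic lives in array_state_show() in drivers/md/md.c.
 */
static const char *array_state(bool in_sync, bool md_broken)
{
	if (in_sync && md_broken)
		return "broken";
	return in_sync ? "clean" : "active";
}

int main(void)
{
	/* this thread's case: degraded but still working array */
	printf("%s\n", array_state(true, false));	/* clean */
	/* failed raid5 once the patch sets MD_BROKEN */
	printf("%s\n", array_state(true, true));	/* broken */
	return 0;
}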


Am I missing something?

Thanks,
Mariusz



