Re: Add spare disk to raid50

> Read the man page (spare-group parameter), mdadm allows for one spare
> drive to be used
> for multiple arrays.

I have read the man page, but it doesn't work as described, or more
likely I am doing something wrong.  Are there any working examples?
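
As far as I can tell from the man page, all that should be needed is the
same spare-group= tag on every ARRAY line in mdadm.conf plus a running
mdadm --monitor (mdmonitor.service), which I understand boils down to
roughly the following (a sketch of my reading of the man page, not a
setup I have seen working):

   # the monitor that is supposed to move spares between arrays
   # sharing a spare-group; mdmonitor.service runs something similar
   mdadm --monitor --scan --daemonise --delay=60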

My config is simple -

[root@memverge2 anton]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md118 : active raid5 nvme0n1[4] nvme3n1[0](S) nvme7n1[3] nvme5n1[1]
      3125362688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/12 pages [0KB], 65536KB chunk

md117 : active raid5 nvme6n1[4] nvme4n1[1] nvme2n1[0]
      3125362688 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/12 pages [0KB], 65536KB chunk

unused devices: <none>
[root@memverge2 anton]#
[root@memverge2 anton]# cat /etc/mdadm.conf
ARRAY /dev/md/117 level=raid5 num-devices=3 metadata=1.2 UUID=0fab82cd:36301f6c:6ec78c95:7f092d4c spare-group=group1
   devices=/dev/nvme2n1,/dev/nvme4n1,/dev/nvme6n1
ARRAY /dev/md/118 level=raid5 num-devices=3 metadata=1.2 spares=1 UUID=2d7290e5:c3ccbfe9:004cb182:3e325714 spare-group=group1
   devices=/dev/nvme0n1,/dev/nvme3n1,/dev/nvme5n1,/dev/nvme7n1
[root@memverge2 anton]#
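
Two details I am not sure about here: the mdadm.conf man page says a
continuation line has to start with whitespace (the devices= lines above
do), and the mdadm man page says --monitor in --scan mode needs a mail
address or alert program before it will monitor anything.  If the latter
matters, I assume it would just be an extra line such as (the address is
only a placeholder):

   MAILADDR root
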
[root@memverge2 anton]# systemctl restart mdmonitor
[root@memverge2 anton]#
[root@memverge2 anton]# mdadm /dev/md117 --fail /dev/nvme6n1
[root@memverge2 anton]#
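
The monitor only checks the arrays periodically (the default --delay is
60 seconds), so I assume any spare movement would happen on a later
polling cycle at the earliest.  For quicker testing, a monitor with a
shorter delay could be run, something like the sketch below, though I
have not confirmed it changes anything here:

   mdadm --monitor --scan --daemonise --delay=10
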
[root@memverge2 anton]# mdadm -D /dev/md117
/dev/md117:
           Version : 1.2
     Creation Time : Tue Jan 28 11:08:21 2025
        Raid Level : raid5
        Array Size : 3125362688 (2.91 TiB 3.20 TB)
     Used Dev Size : 1562681344 (1490.29 GiB 1600.19 GB)
      Raid Devices : 3
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Jan 28 15:29:47 2025
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : memverge2:117  (local to host memverge2)
              UUID : 0fab82cd:36301f6c:6ec78c95:7f092d4c
            Events : 849

    Number   Major   Minor   RaidDevice State
       0     259        5        0      active sync   /dev/nvme2n1
       1     259        0        1      active sync   /dev/nvme4n1
       -       0        0        2      removed

       4     259        1        -      faulty   /dev/nvme6n1
[root@memverge2 anton]#

And md117 never starts recovering with the spare from md118.
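
In case it helps narrow this down, the checks I can think of (a sketch,
not a confirmed procedure) would be whether the monitor is actually
running and logging anything when the disk is failed, and whether a test
alert goes through:

   systemctl status mdmonitor
   journalctl -u mdmonitor
   mdadm --monitor --scan --oneshot --test

As a manual workaround the spare can of course be moved by hand, assuming
nvme3n1 is the spare marked (S) above:

   mdadm /dev/md118 --remove /dev/nvme3n1
   mdadm /dev/md117 --add /dev/nvme3n1

but the point of spare-group was to avoid exactly that.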

Anton

Mon, 27 Jan 2025 at 23:18, Dragan Milivojević <galileo@xxxxxxxxxxx>:
>
> Read the man page (spare-group parameter), mdadm allows for one spare
> drive to be used
> for multiple arrays.
>
> On Mon, 27 Jan 2025 at 15:15, Anton Gavriliuk <antosha20xx@xxxxxxxxx> wrote:
> >
> > How to add a spare disk to raid50 which consists of several raid5
> > (7+1) ? It would be more economical and flexible than adding spare
> > disks to each raid5 in raid50.
> >
> > Anton
> >




