Re: MegaRAID DDF

On Jul 3, 2018, at 5:51 PM, dpr <dpr@xxxxxxx> wrote:
> 
> On Jul 2, 2018, at 5:06 PM, Gandalf Corvotempesta <gandalf.corvotempesta@xxxxxxxxx> wrote:
>> 
>> I'm still trying to figure out if mdadm is able to read and use a
>> MegaRAID RAID-6 array (it should be DDF compatible).
>> 
> I only have production servers, so I can't safely test on my own. Has
> anyone done this previously?
> 
> I have done this for data recovery — it works fine for read-only but I can’t attest to read/write, scrub, or replacing a drive (I have no reason to believe it wouldn’t work, but have never tried).

I had reason to do this yesterday (on a system containing backups — annoying but not tragic if all the data goes away), and it Just Worked for me.  I unplugged the two SFF-8087 cables from the HW RAID card (LSI 2108-based), removed the card from the chassis, and connected them to a card based on the LSI 2308.  When I powered up the system, md discovered the DDF container (md127) and the two RAID6 arrays (md126 and md125).
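
For the archives, in case auto-assembly doesn't kick in on someone else's setup: I believe the manual equivalent is roughly the following (the device glob simply matches the drive names on this particular box):

# mdadm --assemble /dev/md127 /dev/sd[c-z] /dev/sda[a-u]
# mdadm --incremental /dev/md127

The first command assembles the DDF container, the second starts the member arrays inside it; plain "mdadm --assemble --scan" should do both in one go if mdadm.conf/udev can see the drives.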

An md resync started immediately (as expected).  LVM detected the VG and LVs, and e2fsck -f showed everything clean.
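
For anyone repeating this, the checks amount to roughly the following (the VG/LV names are placeholders, since they are specific to this box):

# cat /proc/mdstat                        # resync progress
# cat /sys/block/md125/md/sync_action     # "resync" while it runs, "idle" afterwards
# vgchange -ay                            # activate the detected VG(s)
# e2fsck -f /dev/somevg/somelv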

The md resync caused mismatch_cnt to skyrocket.  I let the resync complete, ran another e2fsck -f, and the filesystem was still clean.  I spot-checked the data, and everything looks good.
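
The relevant knobs, for reference (I would expect a follow-up check to come back at or near zero now that the resync has rewritten the parity, though I have not verified that the sync_action interface behaves identically for externally-managed DDF arrays):

# cat /sys/block/md125/md/mismatch_cnt
# echo check > /sys/block/md125/md/sync_action   # read-only scrub
# cat /sys/block/md125/md/mismatch_cnt           # read again once the check completes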

I’m running raid6check(8) now; it hasn’t reported any errors, nor do I expect any.
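
If memory serves, the invocation is just the md device plus a starting stripe and a stripe count, with 0 meaning "to the end of the array" (double-check the man page before trusting me on that):

# raid6check /dev/md125 0 0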

Obviously since md will run a resync on the newly-imported array, it is essential that all drives are healthy.
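
Once the drives are visible to the HBA, a quick sanity check is to loop smartctl over them (same caveat about the glob; also look at the reallocated/pending sector counts with -A, not just the overall verdict):

# for d in /dev/sd[c-z] /dev/sda[a-u]; do echo "== $d"; smartctl -H "$d"; done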

# uname -r
4.4.0-116-generic
# mdadm -q --detail /dev/md127
/dev/md127:
        Version : ddf
     Raid Level : container
  Total Devices : 45

Working Devices : 45

  Member Arrays : /dev/md125 /dev/md126

    Number   Major   Minor   RaidDevice

       0       8       64        -        /dev/sde
       1      65      128        -        /dev/sdy
       2      65      224        -        /dev/sdae
       3       8      144        -        /dev/sdj
…
# mdadm -q --detail /dev/md125
/dev/md125:
      Container : /dev/md/ddf0, member 1
     Raid Level : raid6
     Array Size : 55664044032 (53085.37 GiB 56999.98 GB)
  Used Dev Size : 2929686528 (2793.97 GiB 3000.00 GB)
   Raid Devices : 21
  Total Devices : 21

          State : active
 Active Devices : 21
Working Devices : 21
 Failed Devices : 0
  Spare Devices : 0

         Layout : ddf-N-continue
     Chunk Size : 64K

    Number   Major   Minor   RaidDevice State
      12      65      160        0      active sync   /dev/sdaa
      14      65      176        1      active sync   /dev/sdab
      11      65      192        2      active sync   /dev/sdac

…
# mdadm -q --detail /dev/md126
/dev/md126:
      Container : /dev/md/ddf0, member 0
     Raid Level : raid6
     Array Size : 64453103616 (61467.27 GiB 65999.98 GB)
  Used Dev Size : 2929686528 (2793.97 GiB 3000.00 GB)
   Raid Devices : 24
  Total Devices : 24

          State : clean
 Active Devices : 24
Working Devices : 24
 Failed Devices : 0
  Spare Devices : 0

         Layout : ddf-N-continue
     Chunk Size : 64K

    Number   Major   Minor   RaidDevice State
       7       8       32        0      active sync   /dev/sdc
       6       8       48        1      active sync   /dev/sdd
       0       8       64        2      active sync   /dev/sde

…
# cat /proc/mdstat 
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10] 
md125 : active raid6 sdaf[20] sdat[19] sdah[18] sdak[17] sdaj[16] sdam[15] sdab[14] sdag[13] sdaa[12] sdac[11] sdan[10] sdal[9] sdao[8] sdap[7] sdau[6] sdad[5] sdaq[4] sdas[3] sdar[2] sdai[1] sdae[0]
      55664044032 blocks super external:/md127/1 level 6, 64k chunk, algorithm 10 [21/21] [UUUUUUUUUUUUUUUUUUUUU]
      
md126 : active raid6 sdx[23] sds[22] sdh[21] sdi[20] sdp[19] sdr[18] sdq[17] sdz[16] sdm[15] sdv[14] sdw[13] sdt[12] sdk[11] sdg[10] sdl[9] sdn[8] sdc[7] sdd[6] sdu[5] sdf[4] sdo[3] sdj[2] sdy[1] sde[0]
      64453103616 blocks super external:/md127/0 level 6, 64k chunk, algorithm 10 [24/24] [UUUUUUUUUUUUUUUUUUUUUUUU]
      
md127 : inactive sdx[44](S) sdaf[43](S) sdat[42](S) sdah[41](S) sdak[40](S) sds[39](S) sdh[38](S) sdaj[37](S) sdam[36](S) sdi[35](S) sdp[34](S) sdr[33](S) sdq[32](S) sdab[31](S) sdz[30](S) sdag[29](S) sdm[28](S) sdv[27](S) sdaa[26](S) sdac[25](S) sdw[24](S) sdan[23](S) sdt[22](S) sdk[21](S) sdal[20](S) sdg[19](S) sdl[18](S) sdao[17](S) sdn[16](S) sdap[15](S) sdau[14](S) sdad[13](S) sdc[12](S) sdaq[11](S) sdd[10](S) sdu[9](S) sdas[8](S) sdar[7](S) sdf[6](S) sdai[5](S) sdo[4](S) sdj[3](S) sdae[2](S) sdy[1](S) sde[0](S)
      26102520 blocks super external:ddf


> 
> The only “catch” is to check the RAID6 algorithm; I’ve had LSI cards do both ddf-N-continue and ddf-N-restart.  But if you load the DDF container and then start the array(s), it should Just Work.  In my case, I was doing data recovery, so I had to “create” the array and provide all the parameters.
> 
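
Since this comes up in recovery situations: those algorithm names are accepted directly by mdadm's --layout option, so a recreate-for-recovery attempt looks roughly like the sketch below.  The raid-device count and chunk size are just the values from my md125 output, the overlay paths are placeholders, and the device order has to match what the controller used; depending on where the controller put the data you may also need --data-offset.  Do this on overlays only, and keep --assume-clean so mdadm does not start rewriting parity.

# mdadm --create /dev/md0 --assume-clean --level=6 --chunk=64 \
        --layout=ddf-N-continue --raid-devices=21 \
        /dev/mapper/overlay-0 /dev/mapper/overlay-1 ... /dev/mapper/overlay-20
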
> I highly suggest you use overlay files as described in the Wiki.  Once you have something you’re confident will work, remove the overlays and run those same commands on the raw disks.
> 
> Thanks-
> Dan
> 
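
For anyone who has not used the overlay trick from the wiki: the idea is a device-mapper snapshot per disk, so every write lands in a throwaway file and the real disks are never touched.  Per disk it is roughly this (sizes and paths are just examples):

# truncate -s 50G /tmp/overlay-sde                 # sparse file to absorb writes
# loop=$(losetup -f --show /tmp/overlay-sde)
# size=$(blockdev --getsz /dev/sde)                # size in 512-byte sectors
# dmsetup create sde-ov --table "0 $size snapshot /dev/sde $loop P 8"

You then point mdadm at /dev/mapper/sde-ov (and its siblings) instead of the raw /dev/sdX devices, and tear it all down with dmsetup remove / losetup -d when you are done.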




