Reassembling my RAID 1 array

Hi,

I had two 3TB RAID1 arrays (using four 3TB drives): md0 consisting of sdb1 and 
sdc1, and md1 consisting of sdd1 and sde1.

My md0 was getting full, so I bought two 8TB drives (sdf1 and sdg1) and thought 
I could simply add them so that md0 would grow to 11TB. Apparently it doesn't 
work that way: I ended up with 4 drives containing the same data and md0 was 
still only 3TB.

So I figured that if I 'failed' and then 'removed' the 3TB drives from the 
array, and then enlarged the partitions/array to 8TB, I'd end up with an 8TB 
md0 and could repurpose the two 3TB drives.
That seemed to work, until I rebooted.
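(Again roughly, and reconstructed rather than copy-pasted, the sequence was 
something like:

# mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
# mdadm --grow /dev/md0 --raid-devices=2        <- back to a 2-device mirror on sd[fg]1
# mdadm --grow /dev/md0 --size=max              <- grow to the full 8TB partition size

after which md0 did report ~8TB, which matches what the sd[fg]1 superblocks 
below still say.)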

The issue is that, after the reboot, mdadm still looks at sd[bc]1 for md0 
instead of sd[fg]1, and all 4 partitions have the same GUID:

# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] 
[raid10] 
md1 : active raid1 sde1[0] sdd1[1]
      2929992704 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md0 : inactive sdb1[1](S) sdc1[0](S)
      5859985409 blocks super 1.2
       
unused devices: <none>

I've attached far more info about my drives/partitions/arrays in 'raid.status'.
I have no reason to think anything is wrong with md1; I only included it for 
completeness.

So I'd like to know what I need to do to make md0 point to the sd[fg]1 
partitions. Since those drives are much larger, I'm guessing I need to prevent 
the kind of resync that happened when I first added the larger disks. 
I've already written data to those larger drives which I'd really like to keep.
I _think_ I could technically repartition and/or zero out the sd[bc]1 
partitions/drives and thereby 'fix' it (rough guess below), but I'd rather not 
do anything destructive before getting more knowledgeable people's opinions.
And I'd also like to learn the proper way to do it (and how I should've done it 
to begin with).
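
To be concrete, my (untested, possibly wrong) guess at a fix would be something 
like:

# mdadm --stop /dev/md0                           <- drop the stale assembly of sd[bc]1
# mdadm --assemble /dev/md0 /dev/sdf1 /dev/sdg1   <- assemble only from the 8TB members

possibly followed by zeroing the old superblocks once I'm certain nothing on 
them is still needed:

# mdadm --zero-superblock /dev/sdb1 /dev/sdc1

and updating the md0 ARRAY line in /etc/mdadm/mdadm.conf (the path on 
Debian-style systems) plus the initramfs so the same members get picked at boot:

# mdadm --detail --scan       <- use its output to replace the old md0 ARRAY line
# update-initramfs -u

But again, I'd rather hear from people who know better before touching anything.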

Is my initial idea at all possible with mdadm (combining the 4 drives so that 
'small size' + 'large size' = total size, i.e. 11TB in my case)?
Or is the only (or best) way to create two separate md devices and combine them 
with LVM (rough sketch below)?
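
(If the LVM route is the answer, my rough understanding -- with made-up md2/VG/LV 
names -- would be something like:

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   <- repurposed 3TB pair
# pvcreate /dev/md0 /dev/md2
# vgcreate vgbig /dev/md0 /dev/md2
# lvcreate -l 100%FREE -n lvbig vgbig           <- one ~11TB logical volume spanning both mirrors

but corrections welcome.)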

TIA for any help,
  Diederik
# mdadm --examine /dev/sd[de]1
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c93f2429:d281bb4a:911c1f4a:9d3deab5
           Name : cknowsvr01:1  (local to host cknowsvr01)
  Creation Time : Sat Jan  7 02:37:37 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5859985409 (2794.26 GiB 3000.31 GB)
     Array Size : 2929992704 (2794.26 GiB 3000.31 GB)
  Used Dev Size : 5859985408 (2794.26 GiB 3000.31 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 9009fdb5:db68b6d0:10f177e0:1c0a65a8

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Oct 25 23:19:26 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : ae65b894 - correct
         Events : 50630


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : c93f2429:d281bb4a:911c1f4a:9d3deab5
           Name : cknowsvr01:1  (local to host cknowsvr01)
  Creation Time : Sat Jan  7 02:37:37 2017
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5859985409 (2794.26 GiB 3000.31 GB)
     Array Size : 2929992704 (2794.26 GiB 3000.31 GB)
  Used Dev Size : 5859985408 (2794.26 GiB 3000.31 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : cbee2bed:243e0397:0dd42469:21c177f2

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Oct 25 23:19:26 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 7ea10d19 - correct
         Events : 50630


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


===================================================================================


# mdadm --examine /dev/sd[bcfg]1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 50c9e78d:64492e45:018feb15:755a2e08
           Name : cknowsvr01:0  (local to host cknowsvr01)
  Creation Time : Sat Jul 23 04:12:29 2016
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 5859985409 (2794.26 GiB 3000.31 GB)
     Array Size : 2929992704 (2794.26 GiB 3000.31 GB)
  Used Dev Size : 5859985408 (2794.26 GiB 3000.31 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : cd3f2f97:a92635b9:90ecf9c5:aa863cfb

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Sep 15 06:40:59 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 2ba77ae4 - correct
         Events : 87405


   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 50c9e78d:64492e45:018feb15:755a2e08
           Name : cknowsvr01:0  (local to host cknowsvr01)
  Creation Time : Sat Jul 23 04:12:29 2016
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 5859985409 (2794.26 GiB 3000.31 GB)
     Array Size : 2929992704 (2794.26 GiB 3000.31 GB)
  Used Dev Size : 5859985408 (2794.26 GiB 3000.31 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 9f2c41d4:68a6a4c2:d7526447:f660fc98

Internal Bitmap : 8 sectors from superblock
    Update Time : Sat Sep 15 16:12:48 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 9150ae0e - correct
         Events : 87407


   Device Role : Active device 0
   Array State : A.AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 50c9e78d:64492e45:018feb15:755a2e08
           Name : cknowsvr01:0  (local to host cknowsvr01)
  Creation Time : Sat Jul 23 04:12:29 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 15627788943 (7451.91 GiB 8001.43 GB)
     Array Size : 7813894471 (7451.91 GiB 8001.43 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : 4f04026e:0292f69e:44fdde92:944bb886

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Oct 25 17:51:13 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : cd37c7a0 - correct
         Events : 102014


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdg1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 50c9e78d:64492e45:018feb15:755a2e08
           Name : cknowsvr01:0  (local to host cknowsvr01)
  Creation Time : Sat Jul 23 04:12:29 2016
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 15627788943 (7451.91 GiB 8001.43 GB)
     Array Size : 7813894471 (7451.91 GiB 8001.43 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=0 sectors
          State : clean
    Device UUID : d4ad5a8f:79ae80ea:4e0b0df9:915d50bb

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Oct 25 17:51:13 2018
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d4e0ada5 - correct
         Events : 102014


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


===================================================================================


# lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,MODEL,UUID
NAME                                              TYPE    SIZE FSTYPE            MOUNTPOINT MODEL            UUID
sda                                               disk  119.2G                              TS128GMTS400S    
|-sda1                                            part      2G ext4                                          e9745bfe-69bb-453d-93bb-6f728204e351
|-sda2                                            part     16G ext4              /                           8008723b-668f-43f6-b432-8c56ed53f48a
`-sda3                                            part     16G swap              [SWAP]                      5834b4ce-7fa8-4940-842c-7636ffe6065e
sdb                                               disk    2.7T                              WDC WD30EFRX-68E 
`-sdb1                                            part    2.7T linux_raid_member                             50c9e78d-6449-2e45-018f-eb15755a2e08
sdc                                               disk    2.7T                              WDC WD30EFRX-68E 
`-sdc1                                            part    2.7T linux_raid_member                             50c9e78d-6449-2e45-018f-eb15755a2e08
sdd                                               disk    2.7T                              WDC WD30EFRX-68E 
`-sdd1                                            part    2.7T linux_raid_member                             c93f2429-d281-bb4a-911c-1f4a9d3deab5
  `-md1                                           raid1   2.7T LVM2_member                                   M2cmdG-cDUD-eCrQ-jt40-Yte5-enDl-pfFY8i
    |-vgXen-tradestation.home.cknow.org--swap     lvm     128M                                               
    |-vgXen-tradestation.home.cknow.org--disk     lvm       6G                                               
    |-vgXen-vga--passthrough.home.cknow.org--swap lvm       4G                                               
    |-vgXen-vga--passthrough.home.cknow.org--disk lvm       8G                                               
    `-vgXen-lvBackup                              lvm     2.1T                                               
sde                                               disk    2.7T                              WDC WD30EFRX-68E 
`-sde1                                            part    2.7T linux_raid_member                             c93f2429-d281-bb4a-911c-1f4a9d3deab5
  `-md1                                           raid1   2.7T LVM2_member                                   M2cmdG-cDUD-eCrQ-jt40-Yte5-enDl-pfFY8i
    |-vgXen-tradestation.home.cknow.org--swap     lvm     128M                                               
    |-vgXen-tradestation.home.cknow.org--disk     lvm       6G                                               
    |-vgXen-vga--passthrough.home.cknow.org--swap lvm       4G                                               
    |-vgXen-vga--passthrough.home.cknow.org--disk lvm       8G                                               
    `-vgXen-lvBackup                              lvm     2.1T                                               
sdf                                               disk    7.3T                              WDC WD80EFZX-68U 
`-sdf1                                            part    7.3T linux_raid_member                             50c9e78d-6449-2e45-018f-eb15755a2e08
sdg                                               disk    7.3T                              WDC WD80EFZX-68U 
`-sdg1                                            part    7.3T linux_raid_member                             50c9e78d-6449-2e45-018f-eb15755a2e08


===================================================================================



# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 2
    Persistence : Superblock is persistent

          State : inactive

           Name : cknowsvr01:0  (local to host cknowsvr01)
           UUID : 50c9e78d:64492e45:018feb15:755a2e08
         Events : 87407

    Number   Major   Minor   RaidDevice

       -       8       33        -        /dev/sdc1
       -       8       17        -        /dev/sdb1

# mdadm --detail /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Jan  7 02:37:37 2017
     Raid Level : raid1
     Array Size : 2929992704 (2794.26 GiB 3000.31 GB)
  Used Dev Size : 2929992704 (2794.26 GiB 3000.31 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Thu Oct 25 23:19:26 2018
          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : cknowsvr01:1  (local to host cknowsvr01)
           UUID : c93f2429:d281bb4a:911c1f4a:9d3deab5
         Events : 50630

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       1       8       49        1      active sync   /dev/sdd1

# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md1 : active raid1 sde1[0] sdd1[1]
      2929992704 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md0 : inactive sdb1[1](S) sdc1[0](S)
      5859985409 blocks super 1.2
       
unused devices: <none>


===================================================================================


# ./lsdrv

PCI [ahci] 00:11.4 SATA controller: Intel Corporation C610/X99 series chipset sSATA Controller [AHCI mode] (rev 05)
└scsi 3:0:0:0 ATA      TS128GMTS400S    {03008300E13881050255}
 └sda 119.24g [8:0] Partitioned (dos)
  ├sda1 2.00g [8:1] ext4 'boot-part' {e9745bfe-69bb-453d-93bb-6f728204e351}
  ├sda2 16.00g [8:2] ext4 'root-part' {8008723b-668f-43f6-b432-8c56ed53f48a}
  │└Mounted as /dev/sda2 @ /
  └sda3 16.00g [8:3] swap 'swap-part' {5834b4ce-7fa8-4940-842c-7636ffe6065e}
PCI [ahci] 00:1f.2 SATA controller: Intel Corporation C610/X99 series chipset 6-Port SATA Controller [AHCI mode] (rev 05)
├scsi 4:0:0:0 ATA      WDC WD30EFRX-68E {WD-WCC4N1DY46TF}
│└sdb 2.73t [8:16] Partitioned (gpt)
│ └sdb1 2.73t [8:17] MD  (none/) (w/ sdc1) spare 'cknowsvr01:0' {50c9e78d-6449-2e45-018f-eb15755a2e08}
│  └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {50c9e78d:64492e45:018feb15:755a2e08}
│                   Empty/Unknown
├scsi 5:0:0:0 ATA      WDC WD30EFRX-68E {WD-WCC4N1TJ0CU4}
│└sdc 2.73t [8:32] Partitioned (gpt)
│ └sdc1 2.73t [8:33] MD  (none/) (w/ sdb1) spare 'cknowsvr01:0' {50c9e78d-6449-2e45-018f-eb15755a2e08}
│  └md0 0.00k [9:0] MD v1.2  () inactive, None (None) None {50c9e78d:64492e45:018feb15:755a2e08}
│                   Empty/Unknown
├scsi 6:0:0:0 ATA      WDC WD30EFRX-68E {WD-WCC4N1TJ0034}
│└sdd 2.73t [8:48] Partitioned (gpt)
│ └sdd1 2.73t [8:49] MD raid1 (1/2) (w/ sde1) in_sync 'cknowsvr01:1' {c93f2429-d281-bb4a-911c-1f4a9d3deab5}
│  └md1 2.73t [9:1] MD v1.2 raid1 (2) clean {c93f2429:d281bb4a:911c1f4a:9d3deab5}
│   │               PV LVM2_member 2.07t used, 676.13g free {M2cmdG-cDUD-eCrQ-jt40-Yte5-enDl-pfFY8i}
│   └VG vgXen 2.73t 676.13g free {6Ud6YT-CTqU-ZDYS-ZxJ8-c8Vt-Hjmm-lbMD1c}
│    ├dm-4 2.05t [253:4] LV lvBackup ext4 {ba6424fb-b2da-4546-bf54-f54b5fea25cb}
│    ├dm-1 6.00g [253:1] LV tradestation.home.cknow.org-disk ext3 {5d28d8cc-02f5-4eaa-9be7-c71bc09c8c39}
├scsi 7:0:0:0 ATA      WDC WD30EFRX-68E {WD-WMC4N0N4NRCA}
│└sde 2.73t [8:64] Partitioned (gpt)
│ └sde1 2.73t [8:65] MD raid1 (0/2) (w/ sdd1) in_sync 'cknowsvr01:1' {c93f2429-d281-bb4a-911c-1f4a9d3deab5}
│  └md1 2.73t [9:1] MD v1.2 raid1 (2) clean {c93f2429:d281bb4a:911c1f4a:9d3deab5}
│                   PV LVM2_member 2.07t used, 676.13g free {M2cmdG-cDUD-eCrQ-jt40-Yte5-enDl-pfFY8i}
├scsi 8:0:0:0 ATA      WDC WD80EFZX-68U {R6GL333Y}
│└sdf 7.28t [8:80] Partitioned (gpt)
│ └sdf1 7.28t [8:81] MD raid1 (2) inactive 'cknowsvr01:0' {50c9e78d-6449-2e45-018f-eb15755a2e08}
└scsi 9:0:0:0 ATA      WDC WD80EFZX-68U {R6GLEWGY}
 └sdg 7.28t [8:96] Partitioned (gpt)
  └sdg1 7.28t [8:97] MD raid1 (2) inactive 'cknowsvr01:0' {50c9e78d-6449-2e45-018f-eb15755a2e08}
Other Block Devices
├dm-0 128.00m [253:0] LV tradestation.home.cknow.org-swap swap {39e63917-e876-4b5e-9365-bd2fa3107964}
├dm-2 4.00g [253:2] LV vga-passthrough.home.cknow.org-swap swap {a1290498-103e-485e-8809-513e3b0690a4}
└dm-3 8.00g [253:3] LV vga-passthrough.home.cknow.org-disk ext4 {2ee01cf0-e45a-49c0-ae37-a5d23da2e5cb}
