Re: Reassembling my RAID 1 array

On 26/10/18 22:43, Diederik de Haas wrote:
> Hi,
> 
> I had 2 x 3TB RAID1 arrays (with 4 3TB drives), md0 consisting of sdb1 and 
> sdc1 and md1 consisting of sdd1 and sde1.
> 
> My md0 was getting full, so I bought 2x8TB drives (sdf1 and sdg1) and thought 
> I could just add them so md0 would be 11TB in size. Apparently it doesn't work 
> that way: I just had 4 drives containing the same data, and my md0 was still 
> only 3TB.

That would be a raid-10 you were thinking of - raid-0 an 8 and a 3
together, and then mirror your two 8+3 pairs. Personally, I don't think
that's a good idea.
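
If you really wanted that, the layered version would look something like
this (a sketch only - hypothetical md numbers, and note that losing
either 3TB drive takes a whole 11TB half of the mirror with it):

  # stripe each 8TB drive with a 3TB drive, then mirror the two stripes
  mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/sdf1 /dev/sdb1
  mdadm --create /dev/md11 --level=0 --raid-devices=2 /dev/sdg1 /dev/sdc1
  mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/md10 /dev/md11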
> 
> So I figured that if I'd 'fail' and then 'remove' the 3TB drives from the 
> array and then enlarged the partitions/arrays to 8TB then I'd get md0 to 8TB 
> and then I could repurpose the 2x3TB drives. 
> That seemed to work, until I rebooted.

Um - ouch. Did you do an mdadm resize (--grow), followed by a filesystem
resize? Because yes, that should have worked.
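
For reference, the sequence I'd have expected is along these lines (a
sketch only - it assumes ext4 on md0, so substitute your filesystem's
own resize tool):

  # add the new drives and pull them in as active mirrors
  mdadm /dev/md0 --add /dev/sdf1 /dev/sdg1
  mdadm --grow /dev/md0 --raid-devices=4
  # WAIT for the resync onto sdf1/sdg1 to finish (watch /proc/mdstat)
  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
  mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
  mdadm --grow /dev/md0 --raid-devices=2
  # grow the array into the bigger partitions, then the filesystem
  # (an internal bitmap may need removing first with --grow
  #  --bitmap=none, and re-adding afterwards with --bitmap=internal)
  mdadm --grow /dev/md0 --size=max
  resize2fs /dev/md0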
> 
> The issue is that mdadm still looks at sd[bc]1 for md0 instead of sd[fg]1 and 
> all 4 partitions have the same GUID

That seems well weird. The whole point of a GUID is that it's unique per
device, so 4 partitions with the same GUID sounds well wrong. However,
looking at your output, I think you mean they all have the same *array*
UUID, which is correct - they're all members of the same array.
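
You can see the difference with --examine, something like:

  # the Array UUID should match across members, the Device UUID shouldn't
  mdadm --examine /dev/sd[bcfg]1 | grep -E 'dev|UUID'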

What happens if you try to manually assemble sd[fg] into an array? b and
c are both spares, so you might well get a working array from f and g.
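
Something along these lines (stop the half-assembled array first):

  mdadm --stop /dev/md0
  mdadm --assemble /dev/md0 /dev/sdf1 /dev/sdg1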

What happens if you remove sd[bc] from the computer? Will md0 re-appear?
Oh - and you DID make sure that the resync onto sd[fg] was complete
before you started messing about removing the other two drives? With
drives that size a resync can take ages ...
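
For future reference, the progress is easy to watch while it runs:

  # during a resync mdstat shows a line like
  #   recovery = 12.6% (...) finish=127.5min
  watch cat /proc/mdstat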

One last point. What do you get from "mdadm --version"? What's happening
sounds suspiciously similar to a known bug in 3.3 or 3.4 - I can't
remember the details because we're now (iirc) on the verge of releasing 4.2.

> 
> # cat /proc/mdstat
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] 
> [raid10] 
> md1 : active raid1 sde1[0] sdd1[1]
>       2929992704 blocks super 1.2 [2/2] [UU]
>       bitmap: 0/22 pages [0KB], 65536KB chunk
> 
> md0 : inactive sdb1[1](S) sdc1[0](S)
>       5859985409 blocks super 1.2
>        
> unused devices: <none>
> 
> I've attached far more info about my drives/partitions/array in 'raid.status'.
> I have no reason to think anything is wrong with md1, only included it for 
> completeness.
> 
> So I'd like to know what I need to do in order for md0 to point to sd[fg]1 
> partitions. Since those drives are way larger, I'm guessing I really need to 
> prevent some kind of syncing which I had when I first added the larger disks. 
> I've used and written data to those larger drives which I'd really like to 
> keep.
> I _think_ that I could technically repartition and/or zero out the sd[bc]1 
> partitions/drives and thereby 'fix' it, but I'd rather not do anything 
> destructive before getting more knowledgeable people's opinion.
> And I'd also like to learn the proper way to do it (and how I should've done it 
> to begin with).
> 
> Is my initial idea at all possible with mdadm (combining 4 drives so that 
> 'small size' + 'large size' = total size, ie 11TB in my case)?
> Or is the only (or best) way to create 2 different md devices and combine them 
> with LVM?
> 
Combining "small size" with "large size" is eminently possible, as
mentioned above. I just wouldn't do it. If you want to do something like
that, you're better off combining your two newly redundant 3TB drives
with those for md1, and creating a raid-10 or raid-6. Either of those
will give you a 6TB array, or if you went to raid-5 you could have 9TB.
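
As a sketch (hypothetical device names, and it's destructive - md1
would have to be emptied and stopped first):

  # four 3TB drives in raid-6: ~6TB usable, any two drives can fail
  mdadm --create /dev/md2 --level=6 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1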

Cheers,
Wol


