Re: Reassembling my RAID 1 array

On zaterdag 27 oktober 2018 16:39:20 CEST Wols Lists wrote:
> > My md0 was getting full, so I bought 2x8TB (sdf1 and sdg1) drives and
> > thought I could just add them so md0 would be 11TB in size. Apparently it
> > doesn't work that way and I just had 4 drives containing the same data
> > and my md0 still was only 3TB big.
> 
> That would be a raid-10 you were thinking of - raid-0 an 8 and a 3, and
> then mirror your two 8,3 pairs. Personally, I wouldn't think that a good
> idea.

OK, thanks. It may be something to experiment with in the future, but for now I 
agree.

> > So I figured that if I'd 'fail' and then 'remove' the 3TB drives from the
> > array and then enlarged the partitions/arrays to 8TB then I'd get md0 to
> > 8TB and then I could repurpose the 2x3TB drives.
> > That seemed to work, until I rebooted.
> 
> Um - ouch. Did you do an mdadm resize, followed by a partition resize?
> Because yes that should have worked.

Yes. I thought about including part of my ~/.bash_history, but as I did most 
things within tmux, it is incomplete and would therefore probably give the 
wrong impression.
I did run "mdadm --grow /dev/md0 --size=max"
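Reconstructed from memory (so treat this as a rough sketch, not an exact 
transcript), the sequence I believe I ran was something like:

```shell
# Add the new 8TB partitions and grow the mirror to include them
mdadm /dev/md0 --add /dev/sdf1 /dev/sdg1
mdadm --grow /dev/md0 --raid-devices=4   # 4-way mirror while syncing

# ... waited for /proc/mdstat to show the resync had completed ...

# Fail and remove the old 3TB members, shrink back to 2 devices
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm --grow /dev/md0 --raid-devices=2

# Grow the array to the full size of the remaining 8TB members,
# then grow the filesystem on top of it
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0   # assuming ext4 here; use your filesystem's own tool
```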

> > The issue is that mdadm still looks at sd[bc]1 for md0 instead of sd[fg]1
> > and all 4 partitions have the same GUID
> 
> That seems well weird. The whole point of a GUID is it is unique per
> device, so 4 partitions with the same guid sounds well wrong. However,
> looking at your output, I think you mean they all have the same *array*
> guid, which is correct - they're all the same array.

I forgot to include another (probably) useful output:
# mdadm --examine --scan
ARRAY /dev/md/0  metadata=1.2 UUID=50c9e78d:64492e45:018feb15:755a2e08 
name=cknowsvr01:0
ARRAY /dev/md/1  metadata=1.2 UUID=c93f2429:d281bb4a:911c1f4a:9d3deab5 
name=cknowsvr01:1
ARRAY /dev/md/0  metadata=1.2 UUID=50c9e78d:64492e45:018feb15:755a2e08 
name=cknowsvr01:0

So mdadm detects md0 twice.

> What happens if you try to manually assemble sd[fg] into an array? b and
> c are both spares, so you might well get a working array from f and g.
> 
> What happens if you remove sd[bc] from the computer? Will md0 re-appear?
> Oh - and you DID make sure that the resync onto sd[fg] was complete
> before you started messing about removing the other two drives? With
> drives that size a re-sync can take ages ...

The resync did indeed take several hours, and I had "watch -n 3 cat /proc/mdstat" 
running in one of my tmux windows.

What is the exact command I should use to assemble sd[fg] into an array?
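Is it something like the following? This is my guess from the man page, not 
something I have run yet:

```shell
# Stop the (wrongly assembled) array first, then assemble it
# explicitly from the two 8TB partitions only
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 /dev/sdf1 /dev/sdg1
```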

And I think I explicitly don't want sd[bc] as spares: they are the 3TB drives 
and I'm afraid that would trigger a sync which could very well result in 
errors. Preventing a catastrophic error is the main reason I'm reaching out to 
this list.

I can't physically remove sd[bc] from the computer right now. Would 
repartitioning and/or zeroing out the drives have the same effect? Or is there 
another method I should use instead?
My reasoning is that if mdadm doesn't see the bitmap (?) and/or no longer sees 
the drives as RAID partitions, it also won't bother with them.
Taking the guesswork out of the commands I need to run is another reason for 
contacting the list.
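From reading the man page, I suspect wiping the md superblocks would achieve 
that; again a guess I have not run, so please correct me if this is wrong:

```shell
# Remove the md metadata from the old 3TB partitions so mdadm
# no longer considers them array members. This only destroys the
# RAID superblock on those partitions, but is still irreversible.
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdc1
```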

> One last point. What do you get with "mdadm --version"? 

mdadm - v3.4 - 28th January 2016
(Debian Stretch, version 3.4-4+b1)

> Combining "small size" with "large size" is eminently possible, as
> mentioned above. I just wouldn't do it. If you want to do something like
> that, you're better off combining your two newly redundant 3TB drives
> with those for md1, and creating a raid-10 or raid-6. Either of those
> will give you a 6TB array, or if you went to raid-5 you could have 9TB.

Thanks. I may try that in the future, but right now I'm working on getting 
things set up to (finally!) make a proper backup. After that I'd feel much 
better about doing some experimenting :)

> Cheers,
> Wol

Thanks a lot for your response,
  Diederik
