Re: Correct way to create multiple RAID volumes with hot-spare?

On Tuesday August 22, steve.cousins@xxxxxxxxx wrote:
> Hi,
> 
> I have a set of 11 500 GB drives. Currently each has two 250 GB 
> partitions (/dev/sd?1 and /dev/sd?2).  I have two RAID6 arrays set up, 
> each with 10 drives and then I wanted the 11th drive to be a hot-spare. 
>   When I originally created the array I used mdadm and only specified 
> the use of 10 drives since the 11th one wasn't even a thought at the 
> time (I didn't think I could get an 11th drive in the case).  Now I can 
> manually add the 11th drive's partitions into each of the arrays, and
> they show up as spares, but on reboot they aren't part of the set
> anymore.  I have added them to /etc/mdadm.conf and the partition type
> is set to Software RAID (fd).

Can you show us exactly what /etc/mdadm.conf contains?
And what kernel messages do you get when it assembles the array but
leaves off the spare?
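
For reference, here is a minimal sketch of the kind of /etc/mdadm.conf
that usually keeps a spare attached across reboots (the UUIDs below are
placeholders for your arrays' real UUIDs, and the spares= counts are an
assumption based on your description):

    DEVICE partitions
    ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx spares=1
    ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx spares=1

You can generate lines of that form with "mdadm --detail --scan" after
the spares have been added, and compare them with what the file holds now.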

> 
> Maybe I shouldn't be splitting the drives up into partitions.  I did 
> this due to issues with volumes greater than 2TB.  Maybe this isn't an 
> issue anymore and I should just rebuild the array from scratch with 
> single partitions.  Or should there even be partitions? Should I just 
> use /dev/sd[abcdefghijk] ?
> 

I tend to just use whole drives, but your setup should work fine.
md/raid isn't limited to 2TB, but some filesystems might have size
issues (though I think even ext2 goes to at least 8 TB these days).
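
If you do decide to rebuild with whole drives, the command usually takes
roughly this shape (a sketch only -- the device names are assumptions
from your description, so check mdadm(8) before running anything
destructive):

    mdadm --create /dev/md0 --level=6 --raid-devices=10 \
          --spare-devices=1 /dev/sd[a-k]

That builds one 10-disk RAID6 with the last listed drive as a hot spare
from the start, rather than adding it afterwards.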

> On a side note, maybe for another thread, the arrays work great until a 
> reboot (using 'shutdown' or 'reboot', and they seem to shut down 
> the md system correctly).  Sometimes one or even two (yikes!) partitions 
> in each array go offline and I have to mdadm /dev/md0 -a /dev/sdx1 it 
> back in.  Do others experience this regularly with RAID6?  Is RAID6 not 
> ready for prime time?

This doesn't sound like a raid issue.  Do you have kernel logs of
what happens when the array is reassembled and some drive(s) are
missing?
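
If it helps in gathering that, the usual places to look (a suggestion
only -- adjust device names and paths for your system) are:

    cat /proc/mdstat                   # current array state and any missing members
    dmesg | grep -iE 'md|raid'         # kernel messages from the last assembly
    mdadm --detail /dev/md0            # which devices md currently counts in md0
    mdadm --examine /dev/sdx1          # what the superblock on a dropped partition says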

NeilBrown
