Re: Software RAID complete drives or individual partitions




Chris,

I've used software RAID quite a bit and have developed a few rules of thumb. Hope these help!

- Use one RAID array, generally md0, for /boot, and a second, md1, for LVM. With RAID1, this allows the individual drives to be mounted and read on another server for recovery.

This is generally how the drives in a RAID1 array look. This example is from a CentOS 5 server, where /boot is only 100MB; on CentOS 6 it would be 500MB.

# fdisk -l /dev/sda
Disk /dev/sda: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   fd  Linux raid autodetect
/dev/sda2              14       30401   244091610   fd  Linux raid autodetect

# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      244091520 blocks [2/2] [UU]
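
If one drive from a RAID1 pair like this ever needs to be read on another machine, the array can be assembled degraded from the single member. A rough sketch of the recovery steps (device and volume group names here are examples, not taken from the output above; double-check them on the actual recovery box):

```shell
# Assemble a degraded RAID1 array from one surviving member
# (adjust /dev/sdb2 to wherever the drive appears on the recovery box):
mdadm --assemble --run /dev/md1 /dev/sdb2

# Or let mdadm find and start any arrays it can detect:
mdadm --assemble --scan

# If the array holds LVM, activate the volume group and mount it
# ("vg0" and "root" are hypothetical names):
vgchange -ay vg0
mount -o ro /dev/vg0/root /mnt
```

With the old 0.90 metadata CentOS 5 uses (stored at the end of the device), a RAID1 member's filesystem can often even be mounted read-only directly, since the data starts at the beginning of the partition.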


- Avoid software RAID5 or 6; use software RAID only for RAID1 or 10. Software RAID5 performance can be abysmal because of the parity calculations: each small write requires reading the old data and parity before the new data and parity can be written. Older hardware RAID controllers can be pretty cheap on eBay; I'm using an old 3ware controller in my home CentOS server. Avoid "HostRAID" adapters, which are just software RAID implemented in the controller's driver rather than in the OS. Even with hardware RAID, RAID5/6 performance won't be nearly as good as RAID10, so I generally only use RAID5 or 6 for partitions that hold backups.
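
The gap can be illustrated with the classic small-write penalty: a random write on RAID5 costs four disk I/Os (read old data, read old parity, write new data, write new parity), while on RAID10 it costs two (one write to each half of a mirror). A back-of-the-envelope sketch with made-up per-disk numbers, not measurements:

```shell
# Illustrative only: assume 4 disks at a nominal 100 random IOPS each.
DISKS=4
IOPS=100
# RAID5: each small write consumes 4 I/Os across the array.
echo "RAID5  random write IOPS: $(( DISKS * IOPS / 4 ))"
# RAID10: each write consumes 2 I/Os (both mirror halves).
echo "RAID10 random write IOPS: $(( DISKS * IOPS / 2 ))"
```

So even before counting the CPU cost of parity math, RAID5 gives up roughly half the random-write throughput of RAID10 on the same four spindles.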

If you are using drives over 1TB, consider partitioning the drives into smaller chunks, say around 500GB each, and creating multiple arrays. That way, if a read error on one sector causes one of the RAID partitions to be marked as bad, only that partition needs to be rebuilt rather than the whole drive.
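
A layout like that could be set up along these lines. All device names and the ~500GB split are illustrative, and mdadm --create is destructive, so verify the device names before running anything like this:

```shell
# Assuming each 2TB drive has been carved into ~500GB partitions of
# type fd (Linux raid autodetect), build one RAID1 array per pair:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4

# Tie the arrays back together as LVM physical volumes in a single
# volume group, so the split is invisible to the filesystems on top:
pvcreate /dev/md1 /dev/md2 /dev/md3
vgcreate vg0 /dev/md1 /dev/md2 /dev/md3
```

A read error then degrades and rebuilds only the one ~500GB array it hit, while the other arrays stay fully redundant.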




Mark Snyder 
Highland Solutions 
200 South Michigan Ave., Suite 1000 
Chicago, IL 60604 
http://www.highlandsolutions.com 



----- Original Message -----
From: "Chris Weisiger" <cweisiger@xxxxxxxxxxxxx>
To: centos@xxxxxxxxxx
Sent: Monday, March 4, 2013 9:53:48 PM
Subject:  Software RAID complete drives or individual partitions

I have been reading about software raid. I configured my first software raid system about a month ago.

I have 4 500 Gig drives configured in RAID 5 configuration with a total of 1.5TB.

Currently I configured the complete individual drives as software raid, then created a /dev/md0 with the drives

I then created a /file_storage partition on /dev/md0.

I created my /boot / and swap partitions on a non raid drive in my system.

Is this the proper way to configure software raid?
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
