Re: Software based SATA RAID-5 expandable arrays?


 



> Why do I use RAID6?  For the extra redundancy 

I've been thinking about RAID6 too, having been bitten a couple of times... the only disadvantage I can see at the moment is that you can't convert and grow it... i.e. I can't convert a 4-drive RAID5 array into a 5-drive RAID6 one when I add an additional drive... I also don't think you can grow a RAID6 array at the moment - I'd want to add additional drives over a few months as they come on sale... Or am I wrong on both counts?


Graham

----- Original Message ----
From: Daniel Korstad <dan@xxxxxxxxxxx>
To: Michael <big_green_jelly_bean@xxxxxxxxx>
Cc: linux-raid@xxxxxxxxxxxxxxx
Sent: Monday, 9 July, 2007 3:31:01 PM
Subject: RE: Software based SATA RAID-5 expandable arrays?


You have lots of options.  This will be a lengthy response and will give just some ideas for just some of the options...

For my server, I started out with a single drive.  I later migrated to a RAID 1 mirror (after having to deal with reinstalls following drive failures, I wised up).  Since I already had an OS that I wanted to keep, my RAID 1 setup was a bit more involved.  I followed this guide to get there;
http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm

Since you are starting from scratch, it should be easier for you.  Most distros have an installer that will guide you through the process.  When you get to hard drive partitioning, look for an advanced option, or a "review and modify partition layout" option, or something similar; otherwise the installer might just guess at what you want, and that would not be RAID.  In this advanced partition setup you will be able to create your RAID.  First make equal-sized partitions on both physical drives.  For example, first carve out a 100M partition on each of the two physical OS drives, then make a RAID 1 md0 from those two partitions and make it your /boot.  Do this again for the other partitions you want RAIDed.  You can do this for /boot, /var, /home, /tmp, /usr.  Keeping them separate can be nice: if a user fills /home/foo with crap it will not affect other parts of the OS, and if the mail spool fills up it will not hang the OS.  The only problem is determining how big to make them during the install.  At a minimum, I would do three partitions: /boot, swap, and /.  This means all the others (/var, /home, /tmp, /usr) live inside the / partition, but this way you don't have to worry about sizing them all correctly.

For the simplest setup, I would do RAID 1 for /boot (md0), swap (md1), and / (md2).  (Alternatively, you could make a swap file in / and not have a swap partition at all; tons of options...)  Do you need to RAID your swap?  Well, I would RAID it, or make a swap file within a RAIDed partition.  If you don't, and your system is using swap when you lose the drive that holds the swap information/partition, you might have issues depending on how important the data on the failed drive was.  Your system might hang.
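If you ever need to build that same layout by hand rather than through the installer, it is a few mdadm commands.  A minimal sketch, assuming the two OS drives are /dev/sda and /dev/sdb and the matching partitions already exist (device names here are placeholders, not from any real box):

```shell
# Assumes sda1/sdb1 (100M), sda2/sdb2 (swap-sized), and sda3/sdb3 (rest)
# were created as type FD (Linux raid autodetect) with fdisk on each drive.

# Mirror the small partitions into md0 for /boot
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Same pattern for swap (md1) and / (md2)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3

# Filesystems and swap go on the md devices, never the raw partitions
mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
```

The installer does exactly this under the hood when you build the RAID in the advanced partitioning screen.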

After you go through the install and have a bootable OS running on mdadm RAID, I would test that grub was installed correctly on both physical drives.  If grub is not installed on both drives, and down the road you lose the one drive that carries grub, you will have a system that will not boot even though the second drive holds a copy of all the files.  If this were to happen, you can recover by booting a live Linux CD or rescue disk and installing grub manually.  For example, say you only had grub installed to hda and it failed; boot with a live Linux CD and type (assuming /dev/hdd is the surviving second drive):
grub
device (hd0) /dev/hdd
root (hd0,0)
setup (hd0)
quit
You say you are using two 500G drives for the OS.  You don't necessarily have to use all the space for the OS.  You can make your partitions and throw the leftover space into a logical volume.  This logical volume would not be fault tolerant, but it would be the sum of the leftover capacity of both drives.  For example, say you use 100M for /boot, 200G for /, and 2G for swap.  Make a standard ext3 partition out of the remaining space on each drive and put the pair into a logical volume, giving you over 500G to play with for non-critical crap.
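That leftover-space volume is standard LVM.  A rough sketch, assuming the spare partitions ended up as /dev/sda4 and /dev/sdb4 and you mount the result at /mnt/scratch (all names hypothetical):

```shell
# Turn the two leftover partitions into LVM physical volumes
pvcreate /dev/sda4 /dev/sdb4

# Pool them into one volume group...
vgcreate scratch /dev/sda4 /dev/sdb4

# ...and hand all of its free space to a single logical volume
lvcreate -l 100%FREE -n media scratch

# Filesystem and mount as usual.  Remember: no redundancy here,
# losing either drive loses the whole volume.
mkfs.ext3 /dev/scratch/media
mount /dev/scratch/media /mnt/scratch
```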

Why do I use RAID6?  For the extra redundancy, and because I have 10 drives in my array.
I have been an advocate of RAID 6, especially with ever-increasing drive capacities and when the number of drives in the array is above, say, six;
http://www.intel.com/technology/magazine/computing/RAID-6-0505.htm 

http://storageadvisors.adaptec.com/2005/10/13/raid-5-pining-for-the-fjords/ 
"...for using RAID-6, the single biggest reason is based on the chance of drive errors during an array rebuild after just a single drive failure. Rebuilding the data on a failed drive requires that all the other data on the other drives be pristine and error free. If there is a single error in a single sector, then the data for the corresponding sector on the replacement drive cannot be reconstructed. Data is lost. In the drive industry, the measurement of how often this occurs is called the Bit Error Rate (BER). Simple calculations will show that the chance of data loss due to BER is much greater than all the other reasons combined. Also, PATA and SATA drives have historically had much greater BERs, i.e., more bit errors per drive, than SCSI and SAS drives, causing some vendors to recommend RAID-6 for SATA drives if they're used for mission critical data."

Since you are using only four drives for your data array, the overhead for RAID6 (two drives for parity) might not be worth it.  

With four drives you would be just fine with a RAID5.
However, I would set up a cron job to scrub the array every once in a while.  Add this to your crontab...

# Check for bad blocks once a week (every Mon at 2:30am); if bad blocks are found, they are corrected from parity information
30 2 * * Mon echo check > /sys/block/md0/md/sync_action

With this, you will keep hidden bad blocks to a minimum, so when a drive fails you are unlikely to be bitten by a hidden bad block during the rebuild.
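You can also kick off a check by hand and see what it found; the md layer exposes this through sysfs (paths below assume the array is md0, same as the cron line):

```shell
# Start a check manually (the same thing the cron job does)
echo check > /sys/block/md0/md/sync_action

# Watch the scrub progress
cat /proc/mdstat

# After it finishes, a non-zero count means mismatched blocks were seen
cat /sys/block/md0/md/mismatch_cnt
```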

For your data array, I would make one Linux raid autodetect (FD) partition covering the whole of each physical drive.  Then create your raid.

mdadm --create /dev/md3 -l 5 -n 4 /dev/<your data drive1-partition> /dev/<your data drive2-partition> /dev/<your data drive3-partition> /dev/<your data drive4-partition>
(The /dev/md3 name can be whatever you want and will depend on how many raid arrays you already have; just use a number that is not currently in use.)

My filesystem of choice is XFS, but you get to pick your own poison:
mkfs.xfs -f /dev/md3

Mount the device :
mount /dev/md3 /foo

I would edit your /etc/fstab to have it automounted at each startup.
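To make that happen automatically, record the array so it assembles at boot and add the mount entry.  A sketch following the example above (the /foo mount point and XFS choice are just carried over from it):

```shell
# Capture the array definition so mdadm can assemble it at boot
mdadm --detail --scan >> /etc/mdadm.conf

# Add the fstab entry, then test it without rebooting
echo '/dev/md3  /foo  xfs  defaults  0 0' >> /etc/fstab
mount -a
```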

Dan.

----- Original Message -----
From: Michael 
Sent: Sun, 7/8/2007 3:54pm
To: Daniel Korstad 
Subject: Re: Software based SATA RAID-5 expandable arrays?


Hey Daniel,

Time for business... been struggling the last few days setting up the right drive/OS partition
I got two 500gb drives for the OS... Figured I would mirror them...  Of course 500gb is an insane amount of space for Linux...
I then will RAID my 4 other drives with RAID 5 or 6...  (I haven't seen any distros talk about RAID 6, and from wikipedia it doesn't sound attractive, so why do you use it?)

So how the hell do I partition this so that I can use my space to maximum capacity?




----- Original Message ----
From: Daniel Korstad <dan@xxxxxxxxxxx>
To: big_green_jelly_Bean@xxxxxxxxx
Cc: linux-raid@xxxxxxxxxxxxxxx
Sent: Monday, June 18, 2007 8:46:08 AM
Subject: RE: Software based SATA RAID-5 expandable arrays?


Last I checked, expanding drives (reshaping the RAID) in a raid set within Windows is not supported.

Significant size is relative I guess, but 4-8 terabytes will not be a problem in either OS.

I run a RAID 6 (Windows does not support this either, last I checked).  I started out with 5 drives and have reshaped it to ten drives now.  I have a few 250G drives (the old originals) and many 500G drives (added and replacement drives) in the set.  Once all the old 250G drives die off and I replace them with 500G drives, I will grow the RAID to the size of its new smallest disk, 500G.  Grow and reshape are slightly different; both are supported in Linux mdadm, and I have tested both with success.
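The reshape-then-grow sequence looks roughly like this, assuming the array is md3 and the new disk shows up as /dev/sdk (placeholder names, and the XFS grow step assumes the filesystem from earlier in the thread):

```shell
# Add the new disk to the array as a spare...
mdadm --add /dev/md3 /dev/sdk1

# ...then reshape so the array actually uses it
# (here: going from 9 active drives to 10)
mdadm --grow /dev/md3 --raid-devices=10

# Separately, once every small disk has been swapped for a bigger one,
# grow the array out to the new smallest-disk size...
mdadm --grow /dev/md3 --size=max

# ...and then grow the filesystem into the new space
# (xfs_growfs works on the mount point, while mounted)
xfs_growfs /foo
```

The reshape itself runs in the background and can take many hours on a big array; the data stays online throughout.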

I too use my set for media and it is not in use 90% of the time.

I put this line in my /etc/rc.local to put the drives to sleep after a specified number of minutes of inactivity;
hdparm -S 241 /dev/sd*
The values for the -S switch are not intuitive; read the man page.  The value I use (241) puts them into standby (spindown) after 30 min.  My OS is on EIDE and my RAID set is all SATA, hence the wildcard for all SATA drives.

I have been running this for a year now with my RAID set.  It works great and I have had no problems with mdadm waiting on drives to spinup when I access them.

The one caveat: be prepared to wait a few moments if they are all in the spindown state before you can access your data.  For me, with ten drives, it is always less than a minute, usually 30 sec or so.

For a filesystem, I use XFS for my large media files.

Dan.




----- Inline Message Follows -----
To: linux-raid@xxxxxxxxxxxxxxx
From: greenjelly
Subject: Software based SATA RAID-5 expandable arrays?


I am researching my options to build a Media NAS server.  Sorry for the long
message, but I wanted to provide as much detail as possible about my problem,
for the best solution.  I have bolded sections to save time for people who don't
have the time to read all of this.

Option 1: Expand my current Dream Machine!
I could buy a RAID-5 hardware card for my current system (Vista Ultimate
64 with an Extreme 6800 and 2 gigs of 1066MHz RAM).  The Adaptec RAID controller
(model "3805"; you can search NewEgg for the information) will cost me near
$500 (and consume 23W) and supports 8 drives (I have 6).  This controller
contains an 800MHz processor with a large cache of memory.  It will support
an expandable RAID-5 array!  I would also buy a 750W+ PSU (for the additional
safety and security).  The drives in this machine would be placed in shock-absorbing
(noise-reducing) 3-slot 4-drive bay containers with fans (I have
2 of these), and I will be removing an IDE-based Pioneer DVD burner (1 of 3)
because of its flaky performance, given the P965 Intel chipset's lack of
native IDE support and thus the motherboard's Micron SATA-to-IDE device.  I've
already installed 4 drives in this machine (on the native MB SATA
controller), only to have a fan fail on me within days of the installation.
One of the drives went bad (which may or may not have to do with the heat).  There
are 5mm between these drives, and I would now replace both fans with higher-RPM
ball-bearing fans for added reliability (more noise).  I would also need
to find freeware SMART monitoring software, which at this time I cannot find
for Vista, to warn me of increased temps due to fan failure, increased
environmental heat, etc.  The only option is commercial SMART monitoring
software (which may not work with the Adaptec RAID adapter).

Option 2: Build a server.
I have a copy of Windows 2003 server, which I have yet to find out if it
supports native software expandable RAID-5 arrays.  I can also use Linux
(which I have very little experience with) but have always wanted to use and
learn. 

To do either of the last two options, I would still need to buy a new power
supply for my current VISTA machine (for added reliability).  The current
PSU is 550w and with a power hungry RADEON, 3 DVD Drives and a X-Fi sound
card... My nerves are getting frayed. 

I would buy a cheap motherboard, processor, and 1 gig or less of RAM.  Lastly,
I would want a VERY large case.  I have a 7300 NVidia PCI card that was
replaced by an X1950GT in my Home Theater PC so that I can play back
HD/Blu-ray DVDs.

The server option may cost a bit more than the $500 for the Adaptec RAID
controller.  This will only work if Linux or Windows 2003 supports my much-needed
requirements.  My Linux OS will be installed on a 40mb IDE drive (not
part of the array).

The options I seek are to be able to start with a 6 Drive array RAID-5
array, then as my demand for more space increases in the future I want to be
able to plug in more drives and incorporate them into the Array without the
need to backup the data.  Basically I need the software to add the
drive/drives to the Array, then Rebuild the array incorporating the new
drives while preserving the data on the original array.

QUESTIONS
Since this is a media server, and would only be used to serve movies and
video to my two machines, it wouldn't have to be powered up full time (my
music consumes less space and will be kept on two separate machines).
Is there a way to considerably lower the power consumption of this server
for the 90% of the time it's not in use?

Can Linux support Drive Arrays of Significant Sizes (4-8 terabytes)?

Can Linux Software support RAID-5 expandability, allowing me to increase the
number of disks in the array, without the need to backup the media, recreate
the array from scratch and then copy the backup to the machine (something I
will be unable to do)?

I know this is a Linux forum, but I figure many of you guys work with
Windows Server.  If so, does Windows 2003 provide the same support for the
requirements listed above?

Thanks
GreenJelly
-- 
View this message in context: http://www.nabble.com/Software-based-SATA-RAID-5-expandable-arrays--tf3937421.html#a11167521
Sent from the linux-raid mailing list archive at Nabble.com.

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html








