SATA is for gaming machines and high-end workstations. Use SCSI for servers, and hardware RAID rather than software RAID. IBM has 15K RPM SCSI drives now, and with Ultra160 wide channels the data flow just screams.
I don't quite agree. SATA is excellent and significantly more affordable than SCSI. I wouldn't recommend plain PATA IDE for anything; I'd use SATA for almost everything and SCSI only for very high-end situations where money is no concern. For the vast majority of RAID scenarios I would recommend SATA.
I'm very wary of software RAID, although I have used it in a few scenarios and it does do the job. If nothing else, it's much easier on the Linux side if it's a hardware solution, since Linux will just see the array as a single disk.
Four drives set up as RAID 5 over three drives with one hot spare is a very efficient configuration.
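If you do go the Linux software RAID (md) route for something like that, the three-disks-plus-spare layout would be created roughly like this (device names are placeholders, not a recommendation for any particular hardware):

    # RAID 5 across three disks, with a fourth as a hot spare
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
          --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # watch the initial sync
    cat /proc/mdstat

Net capacity is two disks' worth: one disk goes to parity and the fourth sits idle until a failure.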
Yes, it is. One thing to keep in mind, though: make sure you have a good system set up to alert you when a drive fails. I had one RAID array that, due to configuration errors, was unable to get its alarm mail through when a drive failed. Eventually a second drive failed, at which point we noticed it. Personally I go for RAID-10, just to be on the safe side. Drives are so cheap these days that I prefer to pay a little extra and gain that little bit of extra safety...
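With Linux software RAID, mdadm can do that monitoring itself. A minimal sketch, assuming the mail address and the config path used by your distro (both placeholders here):

    # /etc/mdadm.conf -- where the monitor sends its alerts
    MAILADDR admin@example.com

    # run the monitor as a daemon; it mails on Fail/DegradedArray events
    mdadm --monitor --scan --daemonise

    # send a test message per array, so you know the mail actually gets through
    mdadm --monitor --scan --oneshot --test

Hardware controllers ship their own monitoring tools; whichever you use, that kind of test run is exactly what would have caught the mail problem described above.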
> My own thoughts were to keep the root file system outside of the RAID.
That is not necessary. Your hardware RAID arrays will look like individual drives to your software; treat them as such when you partition.
No, it's not necessary. Personally, however, I do prefer to keep the OS on its own disk. This makes it much easier to fix OS software problems: you can have an extra copy of the OS disk ready, so that in case of a software failure you can swap the backup right in and be up and running in minutes, instead of going through a rather more complicated process of restoring an OS onto an existing RAID array.
It also makes patches/upgrades much easier, as you can apply them to the backup disk, swap, and see if everything is OK, and just swap back if not.
It's all just one step more complicated if the OS is on the RAID array.
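As a rough sketch of keeping that backup OS disk around, assuming the OS lives on /dev/sda and the spare is /dev/sdb (hypothetical device names, and the copy should be made offline or from a rescue environment so the filesystems are quiet):

    # clone the OS disk onto the spare, block for block
    dd if=/dev/sda of=/dev/sdb bs=1M

    # sanity-check that the copy actually mounts
    mount /dev/sdb1 /mnt && ls /mnt && umount /mnt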
Why not? If the drive your swap lives on goes down, wouldn't you like swap to be as safe as the rest of the server?
I keep swap, along with everything else OS-related, on the OS disk. The RAID array I use just for data.
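As a sketch of that layout, with purely illustrative device names and mount points:

    # /etc/fstab -- OS and swap on the plain OS disk, the array only for data
    /dev/sda1   /       ext3   defaults   1 1
    /dev/sda2   swap    swap   defaults   0 0
    /dev/md0    /data   ext3   defaults   1 2

With a hardware controller the data line would point at whatever single device the controller presents (e.g. /dev/sdb1) instead of /dev/md0.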
All in all, it's a matter of preference and depends on how you set up your own systems; I wouldn't say there is any one correct answer.
Test it! Load up an OS, copy some large pictures to it, some large documents, and some third-party software or something else you can test with. Pull out a drive and check the pics, docs, and software to make sure they still work while the drive is off-line, while the array is being rebuilt, and once the rebuild is finished. Test as much as you can with your new RAID before you trust it with a live, production server. If you insist on playing about with a SW-RAID, break it and make sure you can still reboot, LOL.
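A minimal sketch of that kind of failure test on a Linux md array, assuming the array is /dev/md0 and the member being pulled is /dev/sdc1 (placeholders):

    # mark one member failed and remove it from the array
    mdadm /dev/md0 --fail /dev/sdc1
    mdadm /dev/md0 --remove /dev/sdc1

    # confirm the data is still readable while degraded, then re-add the disk
    cat /proc/mdstat
    mdadm /dev/md0 --add /dev/sdc1

    # watch the rebuild, and test again once it finishes
    watch cat /proc/mdstat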
I can't agree more. After you install a RAID, TEST it every way you can. It's a nightmare situation to realize you've lost all your data because of some misconfiguration, or because something didn't work the way you thought it was supposed to.