Re: HW RAID configuration - best practice question(s)

I would agree; SATA is more than just jumped-up IDE, but SCSI is the 
preferred choice if you can get the cash. I am going to get my supplier 
to put a SCSI option on paper and I will try to make a case for getting 
SCSI. In a business my size, it may well be that this server has to do 
a stint as a failover for the DB server, and in that scenario SCSI 
would be best.

I do like the idea of having a separate OS disk. Upgrading the OS can 
be a real pain, and I figure that if I keep the OS on its own disk I can 
destroy it and start again without too much trouble. I also like the idea 
of having a spare disk on hand in case of trouble, etc. I guess another 
layout would be a mirrored pair (RAID 1) for the OS volume, with the 
rest given over to data (RAID 5), something like the sketch below.
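
If I ever had to mimic that layout with Linux software RAID rather 
than the controller, I imagine the mdadm commands would look roughly 
like this (untested, and the device names are only examples):

  # OS on a two-disk mirror (RAID 1)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # Data on RAID 5 across three disks, plus one hot spare
  mdadm --create /dev/md1 --level=5 --raid-devices=3 \
        --spare-devices=1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

  # Watch the initial sync progress
  cat /proc/mdstat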


On 8 Feb 2005 at 10:49, urgrue wrote:

> > SATA is for game computers and high-end workstations.  Use SCSI for
> > servers, and hardware RAID, not software RAID.  IBM has 15K RPM SCSI 
> > drives now, and with the Ultra160 wide channels, data flow just 
> > screams.
> 
> I don't quite agree. SATA is excellent and significantly more 
> affordable than SCSI. I would not recommend normal PATA IDE for 
> anything, SATA for almost everything, and SCSI only for very high-end 
> situations where money is not a concern. For the vast majority of RAID 
> scenarios I would recommend SATA.
> I'm very wary of software RAID, although I have used it in a few 
> scenarios and it does do the job. But if nothing else, it's much easier 
> on the Linux side if it's a hardware solution, as Linux will just see 
> the array as a single disk.
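> For example, if the controller presents the array as /dev/sda (that 
> device name is just an illustration), you partition and format it 
> like any ordinary disk:
> 
>   fdisk /dev/sda            # partition the array like a plain disk
>   mkfs.ext3 /dev/sda1       # put a filesystem on the first partition
>   mount /dev/sda1 /data     # and mount it as usual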
> 
> > 4 drives with a RAID 5 over three drives with one hot-spare is a very
> > efficient configuration.
> 
> Yes, it is. One thing to keep in mind is to make sure you have a good 
> system set up to send you an alert when a drive fails. I had one RAID 
> array that, due to configuration errors, was unable to get its alarm 
> mail through when a drive failed. Eventually a second drive failed, at 
> which point we noticed it. Personally I go for RAID-10, just to be on 
> the safe side. Drives are so cheap these days that I prefer to pay a 
> little extra and gain that little bit of extra safety...
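> With Linux software RAID, for instance, mdadm can do the alerting 
> itself (hardware controllers usually ship their own vendor tool for 
> this). A minimal sketch, with a placeholder mail address:
> 
>   # Run in the background and mail on Fail/DegradedArray events
>   mdadm --monitor --scan --mail=admin@example.com --daemonise
> 
>   # Send a test alert per array to confirm the mail path works
>   mdadm --monitor --scan --test --oneshot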
> 
> > > My own thoughts were to keep the root file system outside of the
> > > RAID.
> > 
> > That is not necessary.  Your hardware RAID arrays will look like
> > individual drives to your software; treat them as such when you
> > partition.
> 
> No, it's not necessary. Personally, however, I do prefer to keep the OS 
> on its own disk. This makes it so much easier to fix OS software problems. 
> You can have an extra copy of the OS disk ready, so that in case of 
> software failure you can swap the backup right in and be up and running 
> in minutes, instead of having to go through some rather more complicated 
> process of restoring an OS to an existing RAID array.
> It also makes patches/upgrades much easier, as you can apply them to 
> the backup disk, swap, and see if everything is OK, and just swap back 
> if not.
> It's all just one step more complicated if the OS is on the RAID array.
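> Keeping that backup copy fresh can be as simple as an offline dd 
> clone; a rough sketch, assuming both disks are the same size and the 
> system is booted from rescue media so the source is not in use:
> 
>   # Clone the OS disk (sda) onto the standby disk (sdb)
>   dd if=/dev/sda of=/dev/sdb bs=1M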
> 
> > Why not?  If your hard drive with SWAP on it goes down, wouldn't you
> > like it to be as safe as the rest of the server?
> 
> I keep swap, along with everything else OS-related, on the OS disk. The 
> RAID array I use just for data.
> All in all, it's a matter of preference and depends on how you set up 
> your own systems; I wouldn't say there is any one correct answer.
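> In /etc/fstab terms that split might look something like this (the 
> device names and mount point are only examples):
> 
>   /dev/sda2   swap    swap   defaults   0 0   # swap on the OS disk
>   /dev/sdb1   /data   ext3   defaults   0 2   # the RAID array, data only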
> 
> > Test it! Load up an OS, copy some large pictures to it, large
> > documents, and some third-party software or something you can test.  
> > Pull out a drive, and test the pics, docs, and software to make sure 
> > they still work while the drive is off-line, while the drive is being 
> > rebuilt, and once the system is finished rebuilding.  Test as much as 
> > you can with your new RAID before you trust it to a live, production 
> > server.  If you insist on playing about with a SW RAID, break it and 
> > make sure you can reboot LOL.
> 
> I couldn't agree more. After you install a RAID, TEST it every way you 
> can. It's a nightmare situation to realize you've lost all your data 
> because of some misconfiguration, or because something didn't work the 
> way you thought it was supposed to. 
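> For a software (md) RAID, a controlled version of that break-it test 
> might look like this (device names are examples only):
> 
>   # Mark one member as failed and watch the array go degraded
>   mdadm /dev/md0 --fail /dev/sdc1
>   cat /proc/mdstat
> 
>   # Remove it, add it back, and watch the rebuild complete
>   mdadm /dev/md0 --remove /dev/sdc1
>   mdadm /dev/md0 --add /dev/sdc1
>   watch cat /proc/mdstat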
> 
> urgrue


~~
Dermot Paikkos * dermot@xxxxxxxxxxxxxxxx
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668
