Software RAID setup ideas and an apology.




The apology first.  If I've offended or just annoyed anyone, I apologize.  I 
can sometimes go 'off the deep end' when I'm being sarcastic (an old habit 
from my news.admin days, when I ran a Usenet site in the early 90's), and 
I'll try not to go into news.admin mode here in the future.

Now, the question.  I have a few ideas about how to do this, but am not sure 
of the ideal way of doing it.  It has to be a pretty common thing to do, so 
I'll ask it here.

I have a server (Dell PE 1600SC) with two SCSI host adapters: a U320 LSI/MPT 
for internal drives only, and an Adaptec 39160 with two internal and two 
external U160 channels.  I have set up a software RAID config with LVM2 on 
top (CentOS 4) with four 36GB 15K RPM Ultra 3 drives.  The internal drives 
are set up with /dev/sda and /dev/sdb each carrying a 100MB boot partition 
(/boot on sda, a /boot2 backup on sdb), with the rest of each drive mirrored 
in a RAID1.  /dev/sdc and /dev/sdd each have a single RAID partition and are 
likewise mirrored in a RAID1.  So I have /dev/md0 and /dev/md1, which are 
the two physical volumes of a volume group with the various logical volumes 
under that.  This works and performs well, all things considered.
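For reference, building that layout from scratch would look roughly like 
this (device numbering matches the description above; the volume group and 
logical volume names are made up, and the 100MB boot partitions are assumed 
to already exist from partitioning):

```shell
# Mirror the large second partitions on the internal drives (sda2/sdb2)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Mirror the single whole-drive RAID partitions on sdc/sdd
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

# LVM2 on top: both mirrors become physical volumes in one volume group
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# Carve out logical volumes as needed, e.g. a 20GB /home
lvcreate -L 20G -n home vg0
mkfs.ext3 /dev/vg0/home
```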

Now, the 39160 HBA is attached through one of its two channels to a donated 
Cisco Storage Array 12 with 12 18GB drives in it.  Unfortunately, the second 
channel of the SA12 isn't working properly (it automatically configures split 
bus for the array if the second channel is activated), so I'm limited to a 
single channel with 12 drives on it.  I have the hardware portion of this 
working, all drives are recognized, etc both at the Adaptec BIOS level and at 
the Linux kernel level (/dev/sde through /dev/sdp).

My question is: in the opinion of those on the list with experience in this 
area, what is the 'best' method of joining these drives?  I would like to 
be able to lose two or more drives and still have redundancy.  I had thought 
about pairing drives into RAID1s with LVM over the array, and I had thought 
about one big RAID5.  But I don't want to shoot performance in the foot, 
either.

I don't have a good hardware RAID controller for this box, so that's not an 
option at this point.  I do have some hardware RAID controllers, but the 
ones that work OK with this array have much slower SCSI buses, and the 
boot-up behavior of this particular array drives the good Ultra 3 
controllers I have crazy.  The array powers up when it receives signal from 
the host adapter: not each drive, but the WHOLE ARRAY powers up on command. 
The DPT controllers I have simply refuse to recognize any drives at all on 
the bus, regardless of spinup delay (I set it to 60 seconds and still no 
joy; the 39160 recognizes all the drives just fine).

One of those slower controllers might still be better than software RAID; 
at this point I'm open to suggestions.
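For concreteness, the two layouts I'm weighing would look something like 
this (device names taken from the sde-sdp range above; the md numbers and 
volume group name are illustrative, and the two options are alternatives, 
not both at once):

```shell
# Option 1: six RAID1 pairs, joined into one volume group with LVM
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
# ...and so on through the remaining four pairs...
pvcreate /dev/md2 /dev/md3        # plus the remaining mirrors
vgcreate vg_sa12 /dev/md2 /dev/md3

# Option 2: one big RAID5 across eleven drives plus one hot spare
mdadm --create /dev/md2 --level=5 --raid-devices=11 \
      --spare-devices=1 /dev/sd[e-p]1
```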
-- 
Lamar Owen
Director of Information Technology
Pisgah Astronomical Research Institute
1 PARI Drive
Rosman, NC  28772
(828)862-5554
www.pari.edu
