Storage Cluster Newbie Questions - any help with answers greatly appreciated!

Hail Linux Cluster gurus,

I have researched myself into a corner and am looking for advice. I've never been a "clustered storage guy", so I apologize for the potentially naive set of questions. (I'm reasonably savvy on most other aspects of networks, hardware, operating systems, etc., but not storage systems.)

I've been handed (2) x86-64 boxes with 2 local disks each, and (2) FC-AL disk shelves with 14 disks each, and been told to make a mini NAS/SAN (NFS required, GFS optional). If I can get this working reliably, there appear to be about another (10) FC-AL shelves and a couple of Fibre Channel switches lying around that will be handed to me.

NFS filesystems will be mounted by several (fewer than 6) Linux machines and a few (fewer than 4) Windows machines [[ Microsoft NFS client ]] - all doing more or less web-server-type activities, so lots of reads from a shared filesystem (log files are not on NFS, so heavy write I/O isn't an issue). I'm locked into NFS v3 for various reasons. Optionally, the Linux machines could be clustered and use GFS instead, but I would still need a solution for the Windows machines, so a NAS answer is required even if I do GFS on the Linux boxes.
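On the client side I expect the Linux mounts to look something like the line below - just a sketch with made-up server and export names, pinning the protocol to v3 since that's what I'm stuck with:

    # Force NFSv3 on a RHEL client (server name and export path are placeholders)
    mount -t nfs -o vers=3,hard,intr nfsserver:/exports/data /mnt/data

The Windows boxes would hit the same export via the Microsoft NFS client.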

Active / Passive on the NFS is fine.

* Each of the (2) x86-64 machines has a QLogic dual-port HBA, with one fibre connected directly to each shelf (no fibre switches yet, but they will come later if I can make this all work); I've loaded RHEL 5.4 x86-64.

* On each of the (2) RHEL 5.4 boxes I used the 2 local disks with the onboard fake RAID1 as /dev/sda - a basic install, so /boot and LVM for the rest - nothing special here (I skipped mdadm for the local disks, basically to keep /dev/sda simple).

* Each of the (2) RHEL 5.4 boxes can see all the disks on both shelves. Since I don't have fibre switches yet, at the moment there is only 1 path to each disk; however, assuming I'll eventually make the multi-path setup work, I have already enabled multipath, which gives me consistent names for all 28 disks.
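For the consistent naming I'm relying on something like the /etc/multipath.conf fragment below - just a sketch; the WWID and alias are made-up placeholders (the real WWIDs come from 'multipath -ll'):

    defaults {
        user_friendly_names yes
    }
    multipaths {
        multipath {
            # placeholder WWID - substitute the value shown by 'multipath -ll'
            wwid  360050768018182b6c000000000000001
            alias shelf1-disk1
        }
        # ...one stanza per disk, 28 in total...
    }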

Here's my dilemma: how do I best add redundancy to the disks, removing as many single points of failure as possible while preserving as much disk space as I can?

My initial thought was to take "shelf1:disk1" and "shelf2:disk1" and put them into a software RAID1 with mdadm, then put the resulting /dev/md0 into LVM. When I need more disk space, I create "shelf1:disk2" + "shelf2:disk2" as another software RAID1, add the new /dev/md1 to the same LVM volume group, and expand the filesystem (rough command sketch after the list below). This handles a couple of things in my mind:

1. Each shelf is really an FC-AL loop, so it's possible that a single disk going nuts could flood the loop and all the disks in that shelf go poof until the controller sorts itself out and/or the bad disk is removed. Mirroring each disk against the other shelf means losing a whole shelf doesn't take the data down.

2. It's efficient: I retain 50% of raw capacity after redundancy if I can do the "shelf1:diskN + shelf2:diskN" mirrors, and all the mirror bandwidth is spread across the 2 HBA fibres - nothing goes over the TCP network. Conversely, DRBD doesn't excite me much: I'd have to do RAID within each shelf (probably still with mdadm) and then add TCP (Ethernet) based RAID1 between the nodes, and when all is said and done I'd only have 25% of raw capacity left after redundancy.

3. It's easy to add more disk space, as each new mirror (software RAID1) can just be added to the existing LVM volume group.
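For what it's worth, the sequence I have in mind looks roughly like this - only a sketch, and the multipath aliases, VG/LV names, and ext3 choice are placeholders for illustration:

    # Mirror disk1 of shelf1 against disk1 of shelf2
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/shelf1-disk1 /dev/mapper/shelf2-disk1

    # Put the mirror under LVM and build a filesystem
    pvcreate /dev/md0
    vgcreate vg_nas /dev/md0
    lvcreate -n lv_export -l 100%FREE vg_nas
    mkfs.ext3 /dev/vg_nas/lv_export

    # Later, to grow: build the next cross-shelf mirror and extend
    mdadm --create /dev/md1 --level=1 --raid-devices=2 \
        /dev/mapper/shelf1-disk2 /dev/mapper/shelf2-disk2
    pvcreate /dev/md1
    vgextend vg_nas /dev/md1
    lvextend -l +100%FREE /dev/vg_nas/lv_export
    resize2fs /dev/vg_nas/lv_export    # grow ext3 to fill the bigger LV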

From what I can find messing with Luci (Conga), though, I don't see any resource scripts listed for mdadm (on RHEL 5.4) - so would my idea even work? (I have found some posts asking for an mdadm resource script, but I've seen no responses.) I also see that as of RHEL 5.3, LVM has mirrors that can be clustered - is that the right answer? I've done a ton of reading, but everything I've dug up so far either assumes the fibre devices are presented by a SAN that does the redundancy before the RHEL box sees the disks, or covers setups with no fibre at all, where a bunch of locally attached hosts present their storage over TCP (Ethernet). I've found almost nothing on my situation.
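If clustered LVM mirrors are the right answer, I'm picturing something along these lines - a sketch only, assuming clvmd (and the cmirror service for mirrored LVs) is running on both nodes; the VG/LV names and size are placeholders:

    # Clustered VG across one disk from each shelf
    pvcreate /dev/mapper/shelf1-disk1 /dev/mapper/shelf2-disk1
    vgcreate -c y vg_nas /dev/mapper/shelf1-disk1 /dev/mapper/shelf2-disk1

    # -m 1 = two-way mirror, one leg per shelf;
    # --mirrorlog core keeps the log in memory so no third PV is needed for it
    lvcreate -m 1 --mirrorlog core -n lv_export -L 100G vg_nas

Whether that is better or worse than the mdadm+LVM idea above is exactly what I'm unsure about.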

So... here I am. :-) I really just have 2 nodes that can both see a bunch of disks (JBOD), and I want to present that storage to multiple hosts via NFS (required) or GFS (Linux boxes only).
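For the active/passive NFS piece, I'm imagining an rgmanager service in cluster.conf roughly like the snippet below - again only a sketch; the node names, VG/LV, mount point, network, and IP are all made up, and I'm not certain I have every attribute right:

    <rm>
      <failoverdomains>
        <failoverdomain name="nfs-domain" ordered="1" restricted="1">
          <failoverdomainnode name="node1" priority="1"/>
          <failoverdomainnode name="node2" priority="2"/>
        </failoverdomain>
      </failoverdomains>
      <service autostart="1" domain="nfs-domain" name="nfs-svc">
        <lvm name="vg_nas" vg_name="vg_nas" lv_name="lv_export"/>
        <fs name="export-fs" device="/dev/vg_nas/lv_export"
            mountpoint="/exports/data" fstype="ext3" force_unmount="1">
          <nfsexport name="exports">
            <nfsclient name="webservers" target="10.0.0.0/24" options="rw,sync"/>
          </nfsexport>
        </fs>
        <ip address="10.0.0.50" monitor_link="1"/>
      </service>
    </rm>

Does something like that hang together with either the mdadm or the clustered-mirror approach?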

All ideas are greatly appreciated!

-Michael

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
