GFS, iSCSI, multipaths and RAID

Hi everyone,

I have set up a small experimental network with a Linux cluster and a SAN, with the goal of high data availability. There are 2 servers that I have put into a cluster using conga (thank you luci and ricci). There are 2 storage devices, each a basic server with 2 x 1TB disks. The cluster servers and the storage devices each have 2 NICs and are connected through 2 gigabit ethernet switches.
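For reference, conga generated a two-node configuration roughly along these lines in /etc/cluster/cluster.conf (node names are illustrative, and I've left the fencing section out of the snippet for brevity):

    <?xml version="1.0"?>
    <cluster name="testcluster" config_version="1">
      <cman expected_votes="1" two_node="1"/>
      <clusternodes>
        <clusternode name="node1" nodeid="1"/>
        <clusternode name="node2" nodeid="2"/>
      </clusternodes>
      <fencedevices/>
      <rm/>
    </cluster>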

I have created a single striped logical volume on each storage device across its 2 disks (to try to speed up I/O on the volume). Each volume is presented to the cluster servers over iSCSI, with the initiator running on the cluster servers and the iSCSI target running on the storage devices. Since the storage devices have multiple NICs, I have set up two iSCSI portals to each logical volume, and then used mdadm to make each volume accessible via multipath.
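Roughly, the steps were as follows (device names, IQNs and addresses are illustrative, not my exact config). On each storage device:

    # striped LV across the two local disks
    pvcreate /dev/sda /dev/sdb
    vgcreate storagevg /dev/sda /dev/sdb
    lvcreate -i 2 -I 64 -l 100%FREE -n export storagevg

    # export it via the iSCSI target, e.g. in /etc/ietd.conf:
    #   Target iqn.2008-01.com.example:storage1.export
    #       Lun 0 Path=/dev/storagevg/export,Type=blockio

And on each cluster server:

    # discover the target through both portals, then log in to both
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    iscsiadm -m discovery -t sendtargets -p 192.168.2.10
    iscsiadm -m node -T iqn.2008-01.com.example:storage1.export --login

    # bind the two resulting SCSI devices into one mdadm multipath device
    mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd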

Finally, since I want the storage devices to present the data in a highly available way, I have used mdadm to create a software RAID-5 array across the two multipathed volumes (I realise that with only two devices this is essentially mirroring across the 2 storage devices, but I am trying to set it up so that it extends to extra storage devices). My next step is to present the RAID array (of the two multipathed volumes, one per storage device) to the cluster servers as a GFS filesystem, so that locking of access to the data is handled properly.
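Concretely, that step looks something like the following (again, names are illustrative; /dev/md1 and /dev/md2 are the two multipath devices from above):

    # RAID-5 across the two multipath devices, one per storage box
    mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/md1 /dev/md2

    # GFS on top, with DLM locking and one journal per cluster node
    gfs_mkfs -p lock_dlm -t testcluster:gfs1 -j 2 /dev/md0
    mount -t gfs /dev/md0 /mnt/gfs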

I have recently read that multipathing is possible within GFS, but raid is not (yet). Since I want the two storage devices in a raid-5 array and I am using iSCSI, I'm not sure whether I should use GFS to do the multipathing instead. Also, being a linux/storage/clustering newbie, I'm not sure my approach is the best one. I want to make sure the system has no single point of failure that could make any of the data inaccessible, and I'm pretty sure our network design supports this. I assume (if I configure it right) the cluster will keep services going if one of the cluster servers goes down. That leaves the storage devices as the only weak point, which I hope I have now strengthened by essentially implementing network raid across iSCSI and presenting the result as a single GFS filesystem.

I would really appreciate comments, advice and constructive criticism, as I have been learning much of this as I go.

Cheers, Mike

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
