Re: GFS, iSCSI, multipaths and RAID

Michael O'Sullivan wrote:
Hi everyone,

I have set up a small experimental network with a linux cluster and SAN that I want to have high data availability. There are 2 servers that I have put into a cluster using conga (thank you luci and ricci). There are 2 storage devices, each consisting of a basic server with 2 x 1TB disks. The cluster servers and the storage devices each have 2 NICs and are connected using 2 gigabit ethernet switches.
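
A minimal two-node cluster.conf of the kind Conga generates might look
roughly like the sketch below; the cluster and node names are
hypothetical, and fence devices are omitted for brevity:

    <?xml version="1.0"?>
    <cluster name="testcluster" config_version="1">
      <!-- two_node mode lets a 2-node cluster keep quorum with 1 vote -->
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="node1" nodeid="1" votes="1"/>
        <clusternode name="node2" nodeid="2" votes="1"/>
      </clusternodes>
      <fencedevices/>
    </cluster>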

It is a little hard to figure out the exact configuration from this description (a diagram would help, if you can provide one). In general, I don't think GFS is well tuned for iSCSI; in particular, latency can spike if DLM traffic gets mingled with file data traffic, regardless of your network bandwidth. However, I don't have enough data to support that speculation, and it is also very application dependent. One key question: what kind of GFS applications do you plan to deploy in this environment?
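
One common way to keep DLM/cluster traffic away from the iSCSI data
path is to give each node a dedicated cluster interface and make the
node names used in cluster.conf resolve to it, since cman binds to the
address those names resolve to. A sketch with hypothetical addresses:

    # /etc/hosts on every node
    10.0.1.1     node1-clu   # eth1, cluster/DLM traffic; use in cluster.conf
    10.0.1.2     node2-clu   # eth1 on node2
    192.168.0.1  node1       # eth0, iSCSI/file data traffic
    192.168.0.2  node2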

I see you have a SAN here... any reason for choosing iSCSI over FC?


I have created a single striped logical volume on each storage device using the 2 disks (to try to speed up I/O on the volume). These volumes (one on each storage device) are presented to the cluster servers using an iSCSI initiator (on the cluster servers) and an iSCSI target (on the storage devices). Since there are multiple NICs on the storage devices, I have set up two iSCSI portals to each logical volume. I have then used mdadm to ensure the volumes are accessible via multipath.
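
As a sketch of that stack, assuming the IET-style Linux target and
hypothetical device names (/dev/sda and /dev/sdb on the storage device,
/dev/sdc and /dev/sdd as the two paths seen by a cluster server):

    # On the storage device: stripe the two disks into one logical volume
    pvcreate /dev/sda /dev/sdb
    vgcreate vg_store /dev/sda /dev/sdb
    lvcreate --stripes 2 --stripesize 64 --extents 100%FREE \
             --name lv_export vg_store

    # /etc/ietd.conf: export the LV; with the target listening on both
    # NICs, the initiators see two portals to the same LUN
    Target iqn.2008-01.net.example:store1.lv-export
        Lun 0 Path=/dev/vg_store/lv_export,Type=blockio

    # On a cluster server: bind the two paths into one md multipath device
    mdadm --create /dev/md0 --level=multipath --raid-devices=2 \
          /dev/sdc /dev/sdd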

Is the iSCSI target function carried out by the storage device's firmware, or are you using Linux's iSCSI target?

Finally, since I want the storage devices to present the data in a highly available way, I have used mdadm to create a software RAID-5 array across the two multipathed volumes (I realise that with two devices this is essentially mirroring across the 2 storage devices, but I am trying to set this up to be extensible to extra storage devices). My next step is to present the RAID array (of the two multipathed volumes, one on each storage device) as a GFS file system to the cluster servers to ensure that locking of access to the data is handled properly.
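
A sketch of that last layer, assuming the two multipathed volumes
appear as /dev/md0 and /dev/md1 and the cluster is named "testcluster"
(note that md RAID is not cluster-aware, so only one node can safely
assemble and run the array at a time):

    # RAID-5 across the two multipathed iSCSI volumes; with only two
    # members this stores one data and one parity block per stripe,
    # i.e. it is effectively a mirror
    mdadm --create /dev/md2 --level=5 --raid-devices=2 /dev/md0 /dev/md1

    # GFS with DLM locking and one journal per cluster node
    gfs_mkfs -p lock_dlm -t testcluster:gfs0 -j 2 /dev/md2
    mount -t gfs /dev/md2 /mnt/gfs0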

So you're going to have CLVM built on top of software RAID? That looks cumbersome. Again, a diagram would help people understand the setup.
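
For completeness, putting CLVM into such a stack means switching LVM to
cluster-wide locking on every node before creating volumes on the
shared device, roughly:

    # Sets locking_type = 3 in /etc/lvm/lvm.conf
    lvmconf --enable-cluster

    # Start the clustered LVM daemon on every node
    service clvmd start
    chkconfig clvmd on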

-- Wendy

I have recently read that multipathing is possible within GFS, but RAID is not (yet). Since I want the two storage devices in a RAID-5 array and I am using iSCSI, I'm not sure if I should try to use GFS to do the multipathing. Also, being a Linux/storage/clustering newbie, I'm not sure if my approach is the best thing to do. I want to make sure that my system has no single point of failure that would make any of the data inaccessible. I'm pretty sure our network design supports this. I assume (if I configure it right) the cluster will ensure services keep going if one of the cluster servers goes down. Thus the only weak point was the storage devices, which I hope I have now strengthened by essentially implementing network RAID across iSCSI and presenting it as a single GFS file system.
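
As an aside, the usual alternative to mdadm's multipath personality on
Red Hat-style systems is device-mapper-multipath, which handles path
failover below any RAID or LVM layer. A minimal sketch:

    # /etc/multipath.conf
    defaults {
        user_friendly_names yes
    }

    # Then start the daemon and inspect the paths:
    service multipathd start
    multipath -ll   # each iSCSI LUN appears once, with both paths shown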

I would really appreciate comments/advice/constructive criticism, as I have been learning much of this as I go.



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
