Re: GFS network and fencing questions

Thomas Suiter wrote:

I’m going to be building a 6 node cluster with blade servers that only have 2x network connections, attached to EMC DMX storage. The application we are running has its own cluster layer, so we won't be using the failover services (they just want the filesystem to be visible to all nodes). Each node should be reading/writing only in its own directory, on a single filesystem of ~15TB.


Are you going to connect to the DMX via FC or iSCSI?

Questions I have are this:

1) The documentation is unclear on this: I'm assuming that I should bond the 2x interfaces rather than have one interface for public and one for private. I'm thinking this will make the system much more available in general, but I don't know if the public/private split is a hard requirement (or if what I'm thinking is even better). The best case would be to get 2x more interfaces, but unfortunately I don't have that luxury. If bonding is preferred, would I need to use 2x IP addresses in this configuration, or can I use just 1x per node?

Bonding gives you high availability, and with VLANs on top of it you can still have public/private interfaces. Alternatively, you could use one interface for public and the other for private.
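
A minimal sketch of what the bond-plus-VLAN setup might look like with RHEL network scripts (the interface names, bonding mode, VLAN tags and addresses below are only placeholders; on older releases the bonding options may need to go in /etc/modprobe.conf instead of BONDING_OPTS):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-bond0.10  (tagged "public" VLAN, example)
    DEVICE=bond0.10
    VLAN=yes
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.0.10.11
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-bond0.20  (tagged "private"/cluster VLAN, example)
    DEVICE=bond0.20
    VLAN=yes
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.0.20.11
    NETMASK=255.255.255.0

With VLANs you end up with one IP per VLAN interface per node; with a plain bond and no VLANs, a single IP per node is enough for the cluster stack.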

2) I have the capability to support SCSI-3 reservations inside the DMX; should I be using SCSI-3 instead of power-based fencing (or both)? It seems like a relatively new option: is it ready for use, or should it bake a bit longer? I've used Veritas VCS with SCSI-3 previously and it was sometimes semi-annoying. But the reality is that availability and data protection are more important than not being annoyed.

If you don't use multipath, it should work. But if you have a multipath environment, you should check whether it is supported (it wasn't some time ago).
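
If you do go with SCSI-3 PR fencing, the cluster.conf side is roughly as below. This is only a sketch: the exact fence_scsi parameters vary by release, so check fence_scsi(8) on your version (and verify that the DMX LUNs actually honour persistent reservations) before relying on it:

    <clusternode name="node1" nodeid="1">
            <fence>
                    <method name="1">
                            <device name="scsi_fence" node="node1"/>
                    </method>
            </fence>
    </clusternode>
    ...
    <fencedevices>
            <fencedevice agent="fence_scsi" name="scsi_fence"/>
    </fencedevices>

Many people still keep power fencing as a second method even with reservations in place, since it covers cases the storage-level fencing can't.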

3) Since I have more than 2x nodes, should I use qdiskd or not (or is it even needed in this type of configuration with no failover)? Looking around, it appears that it's caused some problems in the past.



Qdiskd is a good option and you should use it if you can. It acts like another independent Ethernet interface (and more).
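
A rough quorumd setup looks like the following (the device path, label, votes and heuristic target are just placeholders; size votes/expected_votes to match your 6-node layout and see mkqdisk(8) and qdisk(5)):

    # create the quorum partition once, from one node
    mkqdisk -c /dev/mapper/qdisk_lun -l gfs_qdisk

    <!-- in cluster.conf -->
    <quorumd interval="2" tko="10" votes="1" label="gfs_qdisk">
            <heuristic program="ping -c1 -w1 10.0.20.254" score="1" interval="2"/>
    </quorumd>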

Best Regards
Maciej Bogucki

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
