RE: GFS and iscsi problem

Hi all, I would like to know that too. I ran some similar tests and GFS simply seems to hang.



My config:
# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="grappesgsge" config_version="1" ipaddr="192.168.1.20">

  <cman expected_votes="1">
  </cman>

  <clusternodes>
    <clusternode name="TORQUE1">
      <fence>
        <method name="human">
          <device name="human" ipaddr="TORQUE1"/>
        </method>
      </fence>
    </clusternode>

    <clusternode name="TORQUE2">
      <fence>
        <method name="human">
          <device name="human" ipaddr="TORQUE2"/>
        </method>
      </fence>
    </clusternode>

    <clusternode name="TORQUE3">
      <fence>
        <method name="human">
          <device name="human" ipaddr="TORQUE3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>

  <fencedevices>
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>

</cluster>
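
One more note on my setup (just a sketch of what I run, assuming the stock cman/fence tools from RHCS are installed): with fence_manual, when a node drops out the remaining nodes block all GFS activity until the fence is acknowledged by hand, so I check membership and then acknowledge the fence after resetting the dead node myself:

  # show cluster membership as cman sees it
  cman_tool nodes

  # after manually power-cycling the failed node (TORQUE2 is just the
  # example name taken from the config above), acknowledge the fence
  fence_ack_manual -n TORQUE2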


Alexandre Racine
Special Projects
514-461-1300 ext. 3304
alexandre.racine@xxxxxxxxx



-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx on behalf of Pawel Mastalerz
Sent: Tue 2007-09-04 07:41
To: linux-cluster@xxxxxxxxxx
Subject:  GFS and iscsi problem
 
Hi,

I have a problem with a GFS cluster and an iSCSI VTrak M500i.

The cluster structure looks like this: each of the 14 nodes is connected to the
VTrak and has the sdb7 disk mounted with GFS. Right now 6 machines are using
that disk to read and write images. Those 6 machines, which store the site,
are plugged into the load balancer (LB). The scheme looks like this:

                        *iscsi*
                          | |
                        <switch>
              |        |        |        |
            node1    node2    node3    node4 ... etc
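
(A quick way to check the iSCSI session state on a single node, assuming the
nodes use the open-iscsi initiator; this is a generic sketch, not our exact
commands:)

  # list active iSCSI sessions on this node
  iscsiadm -m session

  # confirm the LUN is still visible as a block device
  grep sdb /proc/partitions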


config:

<?xml version="1.0"?>
<cluster name="webnews" config_version="1">.
          <clusternodes>
            <clusternode name="www1" votes="1">
              <fence>
                <method name="1">
                  <device name="blade" ipaddr="192.168.3.42" blade="1"/>
                </method>
              </fence>
            </clusternode>
            <clusternode name="www2" votes="1">
              <fence>
                <method name="1">
                  <device name="blade" ipaddr="192.168.3.43" blade="2"/>
                </method>
              </fence>
            </clusternode>

(...)

From time to time one of those nodes loses its connection to the iSCSI target.
When that happens the whole GFS file system is blocked and the rest of the
nodes have no access to that partition (sdb7) :(
Question: why does GFS keep blocking access to that file system for all nodes
even after the node that caused the problem has recovered its iSCSI connection?
I suppose this is GFS's fault, but why don't the logs show anything? The only
thing I can do now is to restart the cluster and GFS.
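
(For context, as far as I understand GFS recovery: when a node is evicted, GFS
activity is suspended cluster-wide until that node has been fenced and its
journal replayed, so if fencing never completes the file system can stay frozen
even after the iSCSI session comes back. A rough sketch of what I check when it
hangs, assuming the standard RHCS tools are installed:)

  # is a fence operation or lock-space recovery still pending?
  cman_tool services

  # did fenced log anything about the failed node?
  grep fenced /var/log/messages | tail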

-- 
Pawel Mastalerz
pawel[dot]mastalerz[at]mainseek[dot]com
http://mainseek.net/



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
