Re: Restarting GFS2 without reboot

On Tue, Nov 26, 2013 at 12:17:14PM +0000, Steven Whitehouse wrote:
> > Okay, one node has gone down, but why can't the second node keep working
> > with the filesystem? :-( That's what surprises and scares me at the same time.
> Well, the second node will need to ensure quorum, so you should have a
> two-node setup configured. That will require some kind of tie-breaker,
> so I'm guessing that you are using qdisk for that? This is why it would
> help if you posted your config, as otherwise I'm left guessing.

I'm using corosync instead of qdiskd. Here's my cluster.conf; it's
really simple:

<?xml version="1.0"?>
<cluster name="ckvm1_pod1" config_version="5">
  <clusternodes>
    <clusternode name="***.host1.ckvm1.***" votes="1" nodeid="1">
      <fence>
        <method name="single">
          <device name="host1_ipmi"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="***.host2.ckvm1.***" votes="1" nodeid="2">
      <fence>
        <method name="single">
          <device name="host2_ipmi"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="host1_ipmi" agent="fence_ipmilan" ipaddr="***" login="***" passwd="***"/>
    <fencedevice name="host2_ipmi" agent="fence_ipmilan" ipaddr="***" login="***" passwd="***"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
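
For what it's worth: a two-node cman cluster without qdisk normally stays
quorate after losing one node only if cluster.conf carries the two_node
flag. As a minimal sketch, assuming cman is the quorum provider here, the
stanza (absent from the config above) would look like:

  <!-- assumed example: lets one surviving node keep quorum in a
       two-node cluster while it fences its dead peer -->
  <cman two_node="1" expected_votes="1"/>

placed directly under the <cluster> element.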


-- 
V.Melnik
