How to deal with a node losing disks in HA-LVM

Using HA-LVM (with LVM tags), if node1 loses access to the disks, it
obviously can't strip the tags.
Other nodes will refuse to recover the service because the tags are still
there and node1 is still online.
If I fence node1, the others will happily take over, because they can see
that node1 is offline and they can safely strip the tags.
I've got self_fence="on" on the resources, but it's unclear to me under
which conditions it actually gets triggered. Apparently not this one
(loss of disk access on the active node, e.g. due to an HBA problem or a
zoning error).
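The takeover rule described above can be sketched as a small shell predicate. This is only my paraphrase of the observed behaviour, with made-up function and variable names, not the actual lvm.sh agent internals:

```shell
# Sketch of the HA-LVM tag-based takeover rule (names are illustrative).
# A node may strip the owner tag and activate the VG only if the VG
# carries no tag, carries our own tag, or the tagged owner is no longer
# a cluster member (i.e. it has been fenced).

can_takeover() {
    owner_tag="$1"     # e.g. from: vgs --noheadings -o vg_tags sanvg
    me="$2"            # our node name
    owner_online="$3"  # "yes" if the tagged node is still a member

    [ -z "$owner_tag" ] && return 0          # unowned: safe to take
    [ "$owner_tag" = "$me" ] && return 0     # already ours
    [ "$owner_online" = "no" ] && return 0   # owner fenced: strip and take
    return 1                                 # owner alive elsewhere: refuse
}
```

This is why fencing node1 unblocks recovery: it flips the "owner still online" answer to "no", so the survivors may strip the tag (vgchange --deltag) and add their own.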

How can I ensure that self_fence is triggered in this case? Or are there better ideas?
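One workaround I can imagine, purely as a sketch and not anything shipped with rgmanager: a small watchdog on the active node that periodically verifies the SAN device is still readable and hard-reboots the node if it isn't, so the survivors see it offline and can strip the tags. DEVICE, INTERVAL, FENCE_CMD and the dd-based read test are all my own assumptions here:

```shell
#!/bin/sh
# Hypothetical self-fencing watchdog for HA-LVM (assumption, not rgmanager code).
# Periodically reads from the SAN-backed device; if the read fails, the node
# hard-reboots itself so the other cluster nodes can safely take over.

DEVICE="${DEVICE:-/dev/sanvg/sanlv}"     # device to probe (assumed name)
INTERVAL="${INTERVAL:-10}"               # seconds between probes
FENCE_CMD="${FENCE_CMD:-reboot -fn}"     # override with e.g. "echo FENCED" to test

disk_ok() {
    # Direct, cache-bypassing single-block read; failure suggests lost SAN access.
    dd if="$1" of=/dev/null bs=4096 count=1 iflag=direct >/dev/null 2>&1
}

watch_loop() {
    while :; do
        if ! disk_ok "$DEVICE"; then
            logger -t ha-lvm-watchdog "lost access to $DEVICE, self-fencing" 2>/dev/null || :
            $FENCE_CMD
            return 1
        fi
        sleep "$INTERVAL"
    done
}
```

Note the hard reboot (-fn: no sync, no clean shutdown) is deliberate, since a clean shutdown would try to stop the service and hang on the dead storage.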

Here are the relevant bits from cluster.conf:
<resources>
  <lvm name="res_sanvg" self_fence="on" vg_name="sanvg"/>
  <fs device="/dev/sanvg/sanlv" fsid="29088" mountpoint="/var/lib/mysql"
  name="fs_sanlv" self_fence="on"/>
  <mysql config_file="/etc/my.cnf" name="res_mysql" shutdown_wait="5"
  startup_wait="5"/>
</resources>
<service domain="DC0" name="srv_mysql" recovery="relocate">
  <lvm ref="res_sanvg">
    <fs ref="fs_sanlv">
      <mysql ref="res_mysql"/>
    </fs>
  </lvm>
</service>

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

