How to configure qdisk in a two-node cluster with mirrored LVM

Hello all,

Situation:
We have a two-node cluster (we don't use GFS). Only one node runs the
active service; the other node is there only in case the first node
crashes (the application automatically restarts on the healthy node).

This service includes a file system resource on an LV that is mirrored
across two storage bays (HSV210 / HP EVA8000; let's call them SAN1 and
SAN2). We also have a quorum disk, which is declared on SAN1.
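
For reference, the mirror and the quorum disk were set up along these
lines (the PV paths and size below are placeholders, not our exact
commands):

  # mirrored LV with one leg on each bay; a core (in-memory) log is used
  # here because a disk log would need a third PV for the log extent
  lvcreate -m 1 --mirrorlog core -L 50G -n lvhpdriver vghpdriver \
      /dev/mpath/san1_lun /dev/mpath/san2_lun

  # quorum disk, using the device and label from cluster.conf below
  mkqdisk -c /dev/vgquorum/lvquorum -l quorum_hpdr
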
Today, we tested the cluster's behaviour during a SAN2 outage (we
simulated it by deactivating the zoning between the nodes and the SAN2
controllers). I/O on the mirrored LV stops immediately. After 2 or 3
minutes, the mirrored LV is converted to linear and I/O resumes on the
remaining storage bay:

Sep 25 12:02:01 redhat lvm[15525]: Mirror device, 253:7, has failed.
Sep 25 12:02:01 redhat lvm[15525]: Device failure in vghpdriver-lvhpdriver
Sep 25 12:02:22 redhat lvm[15525]: WARNING: Bad device removed from
mirror volume, vghpdriver/lvhpdriver
Sep 25 12:02:22 redhat kernel: end_request: I/O error, dev sda, sector 5559920
Sep 25 12:02:22 redhat lvm[15525]: WARNING: Mirror volume,
vghpdriver/lvhpdriver converted to linear due to device failure.

This is pretty much what we hoped for.
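
Once the zoning to SAN2 is reactivated, the mirror can presumably be
re-established with something like this (the PV path is a placeholder):

  # re-add the SAN2 leg to turn the linear LV back into a mirror
  lvconvert -m 1 --mirrorlog core vghpdriver/lvhpdriver /dev/mpath/san2_lun
  # watch the two legs resynchronize
  lvs -a -o +devices,copy_percent vghpdriver
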
But when we run the same test on SAN1 (the bay holding the qdisk), the
cluster instantly becomes inquorate and stops working.
Here is our qdisk configuration:

  <quorumd interval="3" tko="14" votes="2"
           device="/dev/vgquorum/lvquorum" label="quorum_hpdr" log_level="7">
    <heuristic program="/etc/cluster/euristic.sh" score="1" interval="5"/>
  </quorumd>
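
(euristic.sh is not shown above; a minimal sketch of a typical qdisk
heuristic, pinging the default gateway, would look like the following.
The gateway address is a placeholder, not necessarily what our script
does:)

  #!/bin/sh
  # exit 0 while the default gateway answers; qdiskd adds this
  # heuristic's score only while the program keeps exiting 0
  ping -c 1 -w 2 192.168.1.1 >/dev/null 2>&1
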

Here is the output of 'cman_tool status':

# cman_tool status
Protocol version: 5.0.1
Config version: 2
Cluster name: clu_HPDRIVER
Cluster ID: 23324
Cluster Member: Yes
Membership state: Cluster-Member
Nodes: 2
Expected_votes: 2
Total_votes: 3
Quorum: 2
Active subsystems: 5
Node name: redhat.test.com
Node ID: 2
Node addresses: 192.168.1.6

I tried replacing votes="2" with votes="1" in
/etc/cluster/cluster.conf... which solved the problem.
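
If I understand the vote arithmetic correctly (assuming CMAN requires
floor(total_votes / 2) + 1 votes for quorum), this would explain it:

  votes="2": 2 node votes + 2 qdisk votes = 4, quorum = 3
             -> losing the qdisk leaves 2 < 3  : inquorate
  votes="1": 2 node votes + 1 qdisk vote  = 3, quorum = 2
             -> losing the qdisk leaves 2 >= 2 : still quorate
             -> losing one node leaves 1 + 1 = 2 >= 2 : still quorate

I believe the usual recommendation is to give the qdisk (number of
nodes - 1) votes, which for two nodes is exactly 1.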
But is it safe to do this?

Thanks
