Prevent locked I/O in two-node OCFS2 cluster? (DRBD 8.3.8 / Ubuntu 10.10)

Hello,

I am posting here on the recommendation of a reply to my original post on
ocfs2-users: http://oss.oracle.com/pipermail/ocfs2-users/2011-April/005046.html

Excerpt:
  
I am running a two-node web cluster on OCFS2 over DRBD Primary/Primary (v8.3.8), managed by Pacemaker. Everything seems to be working great, except during testing of hard-boot (power-cycle) scenarios.
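For context, the dual-primary and fencing side of a DRBD 8.3 resource is typically declared along the following lines (a sketch only; the resource name r0 and the handler paths are the stock DRBD defaults, not necessarily my exact config):

    resource r0 {
      protocol C;
      net {
        # Both nodes may be Primary at once (required for OCFS2).
        allow-two-primaries;
      }
      disk {
        # On connection loss, suspend I/O and call the fence-peer handler;
        # this is what produces the "Outdated" peer state shown below.
        fencing resource-and-stonith;
      }
      handlers {
        # Shipped with DRBD 8.3: adds/removes a Pacemaker location
        # constraint to fence/unfence the peer.
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }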

Whenever I hard-boot one of the nodes, the surviving node successfully fences the peer and marks its disk “Outdated”:

* <resource minor="0" cs="WFConnection" ro1="Primary" ro2="Unknown" ds1="UpToDate" ds2="Outdated" />
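(That line appears to be the XML one-liner printed by drbdsetup; equivalent views of the same state, assuming the device is /dev/drbd0 and the resource is named r0:)

    drbdsetup /dev/drbd0 status   # XML status line as quoted above
    cat /proc/drbd                # classic human-readable state
    drbdadm cstate r0             # connection state by resource name
    drbdadm dstate r0             # local/peer disk state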

However, this locks up I/O on the surviving node and prevents any operations within the cluster :( I have even forced DRBD into StandAlone mode while in this state, but that does not release the I/O lock either.
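My working assumption (please correct me if this is wrong) is that the DLM underneath OCFS2 blocks all filesystem I/O until Pacemaker confirms the failed peer has been fenced, so a dual-primary setup needs a working STONITH device. A minimal crm-shell sketch, where every parameter is a placeholder rather than a value from my cluster:

    # Hypothetical IPMI fencing primitive; hostname/ipaddr/credentials
    # are placeholders.
    crm configure primitive st-node2 stonith:external/ipmi \
        params hostname="node2" ipaddr="192.168.1.12" \
               userid="admin" passwd="secret" \
        op monitor interval="60s"
    crm configure property stonith-enabled="true"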

...does anyone know whether this is possible with OCFS2, i.e. keeping the cluster active in Primary/Primary when the other node fails (be it forced, controlled, etc.)? Is “qdisk” a requirement for this to work with Pacemaker?
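As I understand it, qdisk belongs to the cman/RHCS stack and is not used by Pacemaker; a two-node Pacemaker cluster is instead usually told to keep running on quorum loss and to rely on STONITH for safety, along these lines (a sketch, not a recommendation for this exact setup):

    # Two nodes cannot retain true quorum after one dies, so quorum loss
    # is typically ignored and safety is delegated to fencing:
    crm configure property no-quorum-policy="ignore"
    crm configure property expected-quorum-votes="2"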

NOTE: In a reply to my original post (URL above) I also provided an example CIB that I have been using during testing.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
