Joseph L. Casale wrote:
Hi Gordan, I'm under the gun to get something operational ASAP and have been reading like mad, but given the severity of getting it wrong, I want some experience under my belt before I step off the reservation. I don't have the infrastructure available to export an iSCSI device; it's the backend I need to make redundant, just the way DRBD would provide.
You can do that with iSCSI. Make a sparse file to represent the exported storage and export that via iSCSI. Do the same on both nodes. Then connect to both targets from both nodes, put CLVM on top, and have it mirror the two exported volumes. Put GFS on top. Nowhere near as clean and nice as DRBD, though, especially come time to recover from a failure.
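A rough sketch of the moving parts, purely illustrative (scsi-target-utils/tgtadm assumed for the export side; the IQN, sizes, device names and paths are placeholders, and clustered mirroring also needs cmirror on top of CLVM):

    # On each node: sparse backing file, exported over iSCSI
    dd if=/dev/zero of=/srv/iscsi-backing.img bs=1M count=0 seek=102400
    tgtadm --lld iscsi --op new --mode target --tid 1 \
        -T iqn.2009-01.com.example:$(hostname -s).store
    tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
        -b /srv/iscsi-backing.img
    tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

    # On each node: log in to both targets
    iscsiadm -m discovery -t sendtargets -p node1
    iscsiadm -m discovery -t sendtargets -p node2
    iscsiadm -m node --login

    # On one node: clustered VG mirrored across the two imported LUNs
    pvcreate /dev/sdb /dev/sdc
    vgcreate --clustered y vg_san /dev/sdb /dev/sdc
    lvcreate --mirrors 1 --mirrorlog core -l 100%FREE -n lv_gfs vg_san

    # GFS, lock_dlm, one journal per node ("mycluster" is a placeholder)
    gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/vg_san/lv_gfs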
I am OK with DRBD, but its integration with RHCS concerns me, as I don't really know how the init scripts get constructed. I saw an archive post about the wrapper for /usr/share/cluster/drbd.sh to hand it the resource name. I assume that if a node tanks, DRBD goes primary on the remaining node and everything is good.
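Roughly what I have in mind for the wrapper, for the sake of discussion (this is not the exact script from the archive post; the resource name r0 and the start/stop/status mapping are my own guesses, on the assumption that rgmanager calls a "script" resource like an init script):

    #!/bin/bash
    # Hypothetical wrapper: map rgmanager's start/stop/status calls onto
    # promoting/demoting a fixed DRBD resource (r0 is a placeholder).
    RES=r0
    case "$1" in
        start)
            drbdadm primary "$RES"
            ;;
        stop)
            drbdadm secondary "$RES"
            ;;
        status|monitor)
            # Succeed only if this node is currently Primary for the resource
            drbdadm role "$RES" | grep -q '^Primary'
            ;;
        *)
            exit 0
            ;;
    esac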
Or you can go active-active with GFS. :)
Looking through drbd.sh I see cases for promote/demote. I added this (the wrapper, actually) as a resource "script" with a child "fs" in a mock cluster; since the underlying device was primary on that node, it just started :) When adding a "script" resource, how does RHCS know to pass the required cases? How do I configure this?
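Something along these lines is what I mean in cluster.conf (names, paths and fs attributes are just illustrative):

    <rm>
      <resources>
        <script name="drbd-wrap" file="/usr/local/sbin/drbd-wrapper.sh"/>
        <fs name="drbd-fs" device="/dev/drbd0" mountpoint="/data"
            fstype="ext3" force_unmount="1"/>
      </resources>
      <service autostart="1" name="drbd-svc" recovery="relocate">
        <script ref="drbd-wrap">
          <fs ref="drbd-fs"/>
        </script>
      </service>
    </rm>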
I use it in active-active mode with GFS. In that case I just use the fencing agent in DRBD's "stonith" configuration so that when disconnection occurs, the failed node gets fenced.
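The relevant drbd.conf bits look roughly like this (8.3-ish syntax; the handler paths are placeholders for whatever you point at the cluster's fencing, e.g. a site script that runs fence_node against the peer):

    resource r0 {
        disk {
            # Suspend I/O and call the fence-peer handler on disconnection
            fencing resource-and-stonith;
        }
        handlers {
            # Placeholder: a site script that fences the peer through RHCS
            fence-peer "/usr/local/sbin/drbd-fence-peer.sh";
            # Placeholder: undo whatever fence-peer did once resync finishes
            after-resync-target "/usr/local/sbin/drbd-unfence-peer.sh";
        }
    }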
Gordan