Re: [clvm] Volume group for uuid not found

Abraham Alawi wrote:
On 2/02/2010, at 11:31 AM, Gordan Bobic wrote:

Abraham Alawi wrote:
On 2/02/2010, at 5:21 AM, Gordan Bobic wrote:
AlannY wrote:
On Mon, Feb 01, 2010 at 03:34:18PM +0000, Christine Caulfield wrote:
Is it really two different devices? If so, then clvmd will not work.
It needs the same view of storage on all systems. Not necessarily
the same names, but definitely the same actual storage.
So, should I use DRBD with clvm for data mirroring on both nodes?
You could, indeed, use DRBD. Or you could cross-export the block devices with iSCSI (or ATAoE) and have them connected to all the nodes that way. I guess it depends on whether you prefer to use DRBD or CLVM. My preference is for DRBD, but don't let that bias your decision. :)
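
For anyone wanting to try the cross-export route, here is a minimal sketch using the stock Linux iSCSI tools (tgtadm on the exporting node, iscsiadm on the importers); the IQN, device path and hostnames are invented for the example:

  # On node1, export a local disk as an iSCSI target (tgtd must be running):
  tgtadm --lld iscsi --op new --mode target --tid 1 \
      -T iqn.2010-02.example:node1.disk0
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

  # On node2, discover and log in to node1's target:
  iscsiadm -m discovery -t sendtargets -p node1
  iscsiadm -m node -T iqn.2010-02.example:node1.disk0 -p node1 --login

Do the same in the other direction (node2 exporting to node1), and every node then sees both disks, which is the "same view of storage" clvmd needs; mirroring could then be done with a CLVM mirrored LV (lvcreate -m 1) across the two PVs.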

Gordan

I reckon GNBD would be a better solution in your case; DRBD is more suitable for non-cluster file systems (e.g. ext3, xfs) in an active-passive setup.
DRBD is specifically designed to also work in active-active mode. I've been running shared root GFS clusters on DRBD for years. DRBD is singularly _THE_ best solution for network RAID1, _especially_ in active-active mode with a clustered file system on top. It also handles resyncing after outages much more gracefully and transparently than other similar solutions.
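
For the record, the active-active part mostly boils down to one setting in the resource definition; a minimal drbd.conf sketch (DRBD 8.x syntax, with hostnames, devices and addresses invented for the example):

  resource r0 {
      protocol C;                      # synchronous replication; required for two primaries
      startup {
          become-primary-on both;      # promote both nodes at startup
      }
      net {
          allow-two-primaries;         # the active-active switch
          # split-brain recovery policies - pick ones you can live with
          after-sb-0pri discard-zero-changes;
          after-sb-1pri discard-secondary;
          after-sb-2pri disconnect;
      }
      on node1 {
          device    /dev/drbd0;
          disk      /dev/sda3;
          address   192.168.0.1:7788;
          meta-disk internal;
      }
      on node2 {
          device    /dev/drbd0;
          disk      /dev/sda3;
          address   192.168.0.2:7788;
          meta-disk internal;
      }
  }

GFS then gets created on /dev/drbd0 and mounted on both nodes.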

Gordan


Yes, it does work in active-active, but the DRBD people themselves don't recommend running it in production in active-active mode under a cluster file system. I quote from their website:
"DRBD's primary-primary mode with a shared disk file system (GFS,
OCFS2). These systems are very sensitive to failures of the
replication network. Currently we cannot generally recommend this
for production use."
http://www.drbd.org/home/mirroring/

That surprises me - it could just be an out-of-date page. I've used it in active-active mode with GFS on top, in all sorts of harsh and abusive edge-case ways, and never saw it skip a beat.

In terms of a production solution, I reckon GNBD is designed more specifically for that purpose.

Not really. GNBD is really paper-thin and quite dumb. It doesn't actually have any feature overlap with DRBD. It's more akin to iSCSI, in the sense that it is for exporting a block device, not mirroring a block device. In other words, it's a 1->many export feature. It won't provide mirroring on its own. Features like mirroring and post-outage resync have to be handled higher up the stack by something else. And those alternatives handle failures nowhere near as gracefully as DRBD does. For example, if one side of a DRBD mirror fails (failed disk), all access gets transparently redirected to the surviving mirror. If a node disconnects, then upon reconnection it will resync only the blocks that changed since it was last connected, and do so transparently, as you would expect.
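
To make the distinction concrete, GNBD usage really is just export and import of a block device, nothing more; a sketch with an invented server name and device:

  # On the serving node: start the server daemon, then export a device
  gnbd_serv
  gnbd_export -d /dev/sdb1 -e shared_disk

  # On each client node: import everything the server exports
  gnbd_import -i node1
  # The device then shows up as /dev/gnbd/shared_disk

Note that there is no redundancy anywhere in that picture - if the serving node goes away, so does the device. Any mirroring would have to be layered on top, e.g. with CLVM.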

Gordan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
