Duplicate Volumes, GFS fail over

One of the reasons I went with Fibre Channel was the redundancy it can give me, 
as well as the centralized storage. There is one aspect of this I haven't found 
a solution for yet when dealing with GFS: redundant paths.

As you can see, I have four storage devices, three of which have dual paths in 
case of failure. The problem is that the system sees the second path as a 
duplicate, and possibly treats it as an error; I'm not sure.

I thought I would ask before trying to put the dual paths to use. Here is the 
output:

  Found duplicate PV TB3VUn1m3CBOj8dRnRjAbRuD3ZIyBp7t: using /dev/sde1 not 
/dev/sda1
  Found duplicate PV uXjYfhj4NlvQphf1DIz28psAAkFmJRaM: using /dev/sdf1 not 
/dev/sdb1
  Found duplicate PV 3mlyNROBtWp4a3LXQEKBiCwk3pJear7u: using /dev/sdg1 not 
/dev/sdc1
  ACTIVE            '/dev/VolGroup01/rimfire' [572.72 GB] inherit
  ACTIVE            '/dev/VolGroup04/web' [318.85 GB] inherit
  ACTIVE            '/dev/VolGroup03/qm' [745.76 GB] inherit
  ACTIVE            '/dev/VolGroup02/sql' [745.76 GB] inherit

I don't have a dual path on the VolGroup01 device because it's just manual 
storage I use now and then.

The question is: how should I deal with the dual paths, and how do I set up the 
volumes/GFS to be fault tolerant in this respect? If one path fails, I want 
GFS to fail over to the second path to the same storage.
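In case it helps frame the question: my understanding is that the usual approach 
is to run device-mapper multipath underneath LVM, so both paths collapse into 
one device and the duplicate-PV warnings go away. A minimal sketch, assuming a 
RHEL-style setup (package, service, and device names like mpath0 are 
illustrative, not from my actual config):

```shell
# Sketch only -- assumes RHEL/CentOS with device-mapper-multipath available.

# 1. Install and start multipathd so each dual-pathed LUN is presented
#    as a single /dev/mapper/mpathN device with path failover built in:
yum install device-mapper-multipath
modprobe dm-multipath
service multipathd start
chkconfig multipathd on

# 2. Verify that both raw paths (e.g. /dev/sda1 and /dev/sde1) now map
#    to one multipath device:
multipath -ll

# 3. In /etc/lvm/lvm.conf, filter out the raw sd* paths so LVM scans
#    only the multipath devices (example filter -- adjust to the layout):
#       filter = [ "a|/dev/mapper/mpath.*|", "r|/dev/sd.*|" ]

# 4. Rescan so the volume groups pick up the multipath devices:
vgscan
```

With that in place, LVM and GFS would sit on top of the multipath device, and 
a path failure should be handled below them rather than by GFS itself; but 
I'd like confirmation before I try it.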

Mike



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
