Re: CLVM in a 3-node cluster


 



Please start a new thread, with a new subject, and include your
cluster.conf file please.

Digimer

On 07/11/2012 05:57 AM, AKIN ÖZTOPUZ wrote:
> Hi
>  
> I have a 2-node cluster without quorum disks. I noticed the problem
> described below:
>  
>  
> When I try to move resources to the other node, the relocation fails
> and the services end up running on the original node again.
>  
> But when I restart the node, the services move over fine.
>  
> Do you have any ideas?
> 
> *From:* Fabio M. Di Nitto <fdinitto@xxxxxxxxxx>
> *To:* linux-cluster@xxxxxxxxxx
> *Sent:* Tuesday, July 3, 2012 7:04 AM
> *Subject:* Re:  CLVM in a 3-node cluster
> 
> On 07/02/2012 11:39 PM, urgrue wrote:
>> On 2/7/12 19:14, Digimer wrote:
>>> On 07/02/2012 01:08 PM, urgrue wrote:
>>>> I'm trying to set up a 3-node cluster with clvm. Problem is, one node
>>>> can't access the storage, and I'm getting:
>>>> Error locking on node node3: Volume group for uuid not found: <snip>
>>>> whenever I try to activate the LVs on one of the working nodes.
>>>>
>>>> This can't be "by design", can it?
>>>
>>> Does pvscan show the right device? Are all nodes in the cluster? What
>>> does 'cman_tool status' and 'dlm_tool ls' show?
>>>
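For reference, the checks Digimer asks about can be run on each node. A minimal sketch (the exact output depends on the cluster, and `clvmd` must be running for its lockspace to appear):

```shell
# Confirm LVM sees the shared physical volume on this node:
pvscan

# Confirm cluster membership and quorum state:
cman_tool status
cman_tool nodes

# List DLM lockspaces; a working clvmd shows a "clvmd" lockspace here:
dlm_tool ls
```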
>>
>> Sorry, I realize now I was misleading, let me clarify:
>> The third node cannot access the storage, this is by design. I have
>> three datacenters but only two have access to the active storage. The
>> third datacenter only has an async copy, and will only activate
>> (manually) in case of a massive disaster (failure of both the other
>> datacenters).
>> So I deliberately have a failover domain with only node1 and node2.
>> node3's function is to provide quorum, but also be able to be activated
>> (manually is fine) in case of a massive disaster.
>> In other words node3 is part of the cluster, but it can't see the
>> storage during normal operation.
>> Looking at it another way, it's kind of as if we had a 3-node cluster
>> where one node had an HBA failure but is otherwise working. Surely node1
>> and node2 should be able to continue running the services?
>> So my question is, do I have an error somewhere, or is clvm really
>> actually not able to function without all nodes being active and able to
>> access storage?
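For context, a restricted failover domain covering only node1 and node2 would look roughly like this in cluster.conf. This is a sketch with assumed names, not the poster's actual configuration (which is why Digimer asks for the real file):

```xml
<rm>
  <failoverdomains>
    <!-- restricted="1": services in this domain may only run on the listed nodes -->
    <failoverdomain name="storage_nodes" restricted="1" ordered="1">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
</rm>
```

Note that a failover domain only constrains where rgmanager places services; it does not exempt node3 from clvmd's requirement that all cluster nodes share a consistent view of the storage, which is the point Fabio makes below.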
> 
> CLVM requires a consistent view of the storage from all nodes in the
> cluster. This is by design.
> 
> A storage failure during operation (i.e. you start with all nodes able
> to access the storage and one later loses access) is handled correctly.
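The clustered-locking setup behind this requirement can be sketched as follows (device path and VG name are assumptions for illustration):

```shell
# /etc/lvm/lvm.conf on every cluster node: use cluster-wide DLM locking
#     locking_type = 3

# Create the volume group as clustered (-cy) so clvmd coordinates
# metadata changes and activation across all nodes:
vgcreate -cy vg_shared /dev/mapper/shared_lun

# Activate exclusively on one node (-aey), or cluster-wide with -ay;
# either way, clvmd must be able to take the lock on every member node:
vgchange -aey vg_shared
```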
> 
> Fabio
> 
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx <mailto:Linux-cluster@xxxxxxxxxx>
> https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> 
> 
> 
> 


-- 
Digimer
Papers and Projects: https://alteeve.com




