Re: openais issue

It seems to me that I am having an issue related to rgmanager.

Here is what I did:

I shut down the vm on all nodes using: clusvcadm -s vm:guest1

Now, if I restart rgmanager on any one node, it restarts but also
starts the same vm on all nodes. I have also set autostart to 0 in my
cluster.conf file just to debug.
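
For reference, here is roughly what the vm stanza looks like after that
change (just a sketch of the edit; the full cluster.conf, from before I
flipped autostart, is quoted further down in this thread):

--
<vm autostart="0" domain="myfd1" exclusive="0" max_restarts="0"
    name="guest1" path="/vms" recovery="restart" restart_expire_time="0"/>
--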


Thanks
Paras.

On Fri, Sep 25, 2009 at 5:24 PM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
> No, I am not starting it manually and I am not using automatic init scripts.
>
> I started the vm using: clusvcadm -e vm:guest1
>
> I have just stopped it using clusvcadm -s vm:guest1. For a few seconds
> it says guest1 started, but after a while I can see guest1 on all
> three nodes.
>
> clustat says:
>
>  Service Name                 Owner (Last)                 State
>  ------- ----                 ----- ------                 -----
>  vm:guest1                    (none)                       stopped
>
> But I can still see the vm in the output of xm li.
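>
> To compare the two views quickly, I am doing something like this (just a
> sketch; it assumes passwordless ssh between the members):
>
> for n in cvtst1 cvtst2 cvtst3; do echo "== $n =="; ssh $n "xm list"; done
> clustat -s vm:guest1    # rgmanager's view of the service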
>
> This is what I can see from the log:
>
>
> Sep 25 17:19:01 cvtst1 clurgmgrd[4298]: <notice> start on vm "guest1"
> returned 1 (generic error)
> Sep 25 17:19:01 cvtst1 clurgmgrd[4298]: <warning> #68: Failed to start
> vm:guest1; return value: 1
> Sep 25 17:19:01 cvtst1 clurgmgrd[4298]: <notice> Stopping service vm:guest1
> Sep 25 17:19:02 cvtst1 clurgmgrd[4298]: <notice> Service vm:guest1 is
> recovering
> Sep 25 17:19:15 cvtst1 clurgmgrd[4298]: <notice> Recovering failed
> service vm:guest1
> Sep 25 17:19:16 cvtst1 clurgmgrd[4298]: <notice> start on vm "guest1"
> returned 1 (generic error)
> Sep 25 17:19:16 cvtst1 clurgmgrd[4298]: <warning> #68: Failed to start
> vm:guest1; return value: 1
> Sep 25 17:19:16 cvtst1 clurgmgrd[4298]: <notice> Stopping service vm:guest1
> Sep 25 17:19:17 cvtst1 clurgmgrd[4298]: <notice> Service vm:guest1 is
> recovering
>
>
> Paras.
>
> On Fri, Sep 25, 2009 at 5:07 PM, brem belguebli
> <brem.belguebli@xxxxxxxxx> wrote:
>> Have you started your VM via rgmanager (clusvcadm -e vm:guest1) or
>> using xm commands outside of cluster control (or maybe through an
>> automatic init script)?
>>
>> When clustered, you should never start services (manually or through
>> an automatic init script) outside of cluster control.
>>
>> The thing to do would be to stop your vm on all the nodes with the
>> appropriate xm command (I am not using xen myself) and try to start it
>> with clusvcadm.
>>
>> Then see if it is started on all nodes (send the clustat output).
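>>
>> For example, something along these lines (a sketch only, since I am not
>> a xen user; take the exact xm subcommands with a grain of salt):
>>
>> # on every node that still shows the guest:
>> xm shutdown guest1        # or 'xm destroy guest1' to force it off
>> # then, from one node, start it under cluster control:
>> clusvcadm -e vm:guest1 -m cvtst1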
>>
>>
>>
>> 2009/9/25 Paras pradhan <pradhanparas@xxxxxxxxx>:
>>> Ok, please see below. My vm is running on all nodes even though
>>> clustat says it is stopped.
>>>
>>> --
>>> [root@cvtst1 ~]# clustat
>>> Cluster Status for test @ Fri Sep 25 16:52:34 2009
>>> Member Status: Quorate
>>>
>>>  Member Name                  ID   Status
>>>  ------ ----                  ---- ------
>>>  cvtst2                        1   Online, rgmanager
>>>  cvtst1                        2   Online, Local, rgmanager
>>>  cvtst3                        3   Online, rgmanager
>>>
>>>  Service Name                 Owner (Last)                 State
>>>  ------- ----                 ----- ------                 -----
>>>  vm:guest1                    (none)                       stopped
>>> [root@cvtst1 ~]#
>>>
>>>
>>> ---
>>> Output of xm li on cvtst1:
>>>
>>> --
>>> [root@cvtst1 ~]# xm li
>>> Name                                      ID Mem(MiB) VCPUs State   Time(s)
>>> Domain-0                                   0     3470     2 r-----  28939.4
>>> guest1                                     7      511     1 -b----   7727.8
>>>
>>> Output of xm li on cvtst2:
>>>
>>> --
>>> [root@cvtst2 ~]# xm li
>>> Name                                      ID Mem(MiB) VCPUs State   Time(s)
>>> Domain-0                                   0     3470     2 r-----  31558.9
>>> guest1                                    21      511     1 -b----   7558.2
>>> ---
>>>
>>> Thanks
>>> Paras.
>>>
>>>
>>>
>>> On Fri, Sep 25, 2009 at 4:22 PM, brem belguebli
>>> <brem.belguebli@xxxxxxxxx> wrote:
>>>> It looks like no.
>>>>
>>>> Can you send the output of clustat from when the VM is running on
>>>> multiple nodes at the same time?
>>>>
>>>> And, by the way, another one after having stopped it (clusvcadm -s vm:guest1)?
>>>>
>>>>
>>>>
>>>> 2009/9/25 Paras pradhan <pradhanparas@xxxxxxxxx>:
>>>>> Is anyone having an issue like mine? The virtual machine service is
>>>>> not being properly handled by the cluster.
>>>>>
>>>>>
>>>>> Thanks
>>>>> Paras.
>>>>>
>>>>> On Mon, Sep 21, 2009 at 9:55 AM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>>>>>> Ok, here is my cluster.conf file:
>>>>>>
>>>>>> --
>>>>>> [root@cvtst1 cluster]# more cluster.conf
>>>>>> <?xml version="1.0"?>
>>>>>> <cluster alias="test" config_version="9" name="test">
>>>>>>        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
>>>>>>        <clusternodes>
>>>>>>                <clusternode name="cvtst2" nodeid="1" votes="1">
>>>>>>                        <fence/>
>>>>>>                </clusternode>
>>>>>>                <clusternode name="cvtst1" nodeid="2" votes="1">
>>>>>>                        <fence/>
>>>>>>                </clusternode>
>>>>>>                <clusternode name="cvtst3" nodeid="3" votes="1">
>>>>>>                        <fence/>
>>>>>>                </clusternode>
>>>>>>        </clusternodes>
>>>>>>        <cman/>
>>>>>>        <fencedevices/>
>>>>>>        <rm>
>>>>>>                <failoverdomains>
>>>>>>                        <failoverdomain name="myfd1" nofailback="0" ordered="1" restricted="0">
>>>>>>                                <failoverdomainnode name="cvtst2" priority="3"/>
>>>>>>                                <failoverdomainnode name="cvtst1" priority="1"/>
>>>>>>                                <failoverdomainnode name="cvtst3" priority="2"/>
>>>>>>                        </failoverdomain>
>>>>>>                </failoverdomains>
>>>>>>                <resources/>
>>>>>>                <vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0" name="guest1" path="/vms" recovery="restart" restart_expire_time="0"/>
>>>>>>        </rm>
>>>>>> </cluster>
>>>>>> [root@cvtst1 cluster]#
>>>>>> ------
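>>>>>>
>>>>>> By the way, whenever I edit this file by hand I try to remember to
>>>>>> bump config_version and push the new file out to the other nodes
>>>>>> (a sketch, assuming the stock RHEL 5 cluster tools):
>>>>>>
>>>>>> # after raising config_version="9" to "10" in the file:
>>>>>> ccs_tool update /etc/cluster/cluster.conf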
>>>>>>
>>>>>> Thanks!
>>>>>> Paras.
>>>>>>
>>>>>>
>>>>>> On Sun, Sep 20, 2009 at 9:44 AM, Volker Dormeyer <volker@xxxxxxxxxxxx> wrote:
>>>>>>> On Fri, Sep 18, 2009 at 05:08:57PM -0500,
>>>>>>> Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
>>>>>>>> I am using Cluster Suite for HA of xen virtual machines. Now I am
>>>>>>>> having another problem. When I start my xen vm on one node, it also
>>>>>>>> starts on the other nodes. Which daemon controls this?
>>>>>>>
>>>>>>> This is usually done by clurgmgrd (which is part of the rgmanager
>>>>>>> package). To me, this sounds like a configuration problem. Maybe
>>>>>>> you can post your cluster.conf?
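>>>>>>>
>>>>>>> If you want to double-check that it really is clurgmgrd doing the
>>>>>>> starts, something like this should be enough (a rough sketch, using
>>>>>>> only standard tools):
>>>>>>>
>>>>>>> ps -C clurgmgrd -o pid,args    # is the daemon running on this node?
>>>>>>> rpm -q rgmanager               # which rgmanager version is installed?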
>>>>>>>
>>>>>>> Regards,
>>>>>>> Volker
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
