Re: failover domain and service start

From: emmanuel segura <emi2fast@xxxxxxxxx>
To: linux clustering <linux-cluster@xxxxxxxxxx>
Subject: Re: failover domain and service start
Sent: Thu, Dec 19, 2013 9:05:18 AM

One other thing: raise the rgmanager log level. For more information you can use this link: https://fedorahosted.org/cluster/wiki/RGManager
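For example, on a RHEL5-era rgmanager the log level can be raised via the <rm> tag in cluster.conf (7 means debug; this is the attribute described on that wiki page, if I remember correctly):

<rm log_level="7">
    ...
</rm>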


2013/12/19 emmanuel segura <emi2fast@xxxxxxxxx>
Run grep "/vms_c" /proc/mounts on every node.
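On a node where the filesystem is mounted, that should return a line shaped roughly like this (device name and options here are made up, just to show the expected shape):

/dev/mapper/vg_vms-lv_c /vms_c gfs rw,hostdata=jid=0 0 0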


2013/12/18 Paras pradhan <pradhanparas@xxxxxxxxx>
I see this:

Dec 18 16:17:18 cvtst3 clurgmgrd[13935]: <notice> Starting stopped service vm:guest1 
Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: <notice> start on vm "guest1" returned 1 (generic error) 
Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: <warning> #68: Failed to start vm:guest1; return value: 1 
Dec 18 16:17:19 cvtst3 clurgmgrd[13935]: <notice> Stopping service vm:guest1 



On Wed, Dec 18, 2013 at 4:39 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
can you show the log?


2013/12/18 Paras pradhan <pradhanparas@xxxxxxxxx>
Added it, but the same problem: the vm does not start.

-Paras.


On Wed, Dec 18, 2013 at 1:13 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
From the vm.sh script I saw that it tries to discover the hypervisor you are using; with hypervisor="xen" you force the script to use xen.


2013/12/18 Paras pradhan <pradhanparas@xxxxxxxxx>
The only parameter I don't have is:  hypervisor="xen"

Does it matter?

This is what I have:

<vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0" name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" use_virsh="0"/>

-Paras.


On Wed, Dec 18, 2013 at 12:24 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
<vm name="guest1" hypervisor="xen" path="/vms_c" use_virsh="0">

Increment the config version in cluster.conf and run ccs_tool update /etc/cluster/cluster.conf.
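A minimal sketch of that (the cluster name is a placeholder, and assume the current config_version is 41):

<cluster name="mycluster" config_version="42">

ccs_tool update /etc/cluster/cluster.conf
cman_tool version   # should report the new config version on every node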


2013/12/18 Paras pradhan <pradhanparas@xxxxxxxxx>
Emmauel,

With:

export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0

I can start the vm using /usr/share/cluster/vm.sh. I am wondering how to make the changes to cluster.conf or other files so that we can start the vm using clusvcadm.

Thanks, and sorry for the delay.

Paras.


On Thu, Dec 5, 2013 at 12:36 PM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
Emmanuel, no. I was busy with some other things. I will test and let you know ASAP!


On Thu, Dec 5, 2013 at 12:26 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
Hello Paras

Did you solve the problem?

Thanks
Emmanuel


2013/11/25 emmanuel segura <emi2fast@xxxxxxxxx>
Hello Paras

Maybe I found the solution. In the function validate_all we have:

       if [ -z "$OCF_RESKEY_hypervisor" ] ||
           [ "$OCF_RESKEY_hypervisor" = "auto" ]; then
                export OCF_RESKEY_hypervisor="`virsh version | grep \"Running hypervisor:\" | awk '{print $3}' | tr A-Z a-z`"
                if [ -z "$OCF_RESKEY_hypervisor" ]; then
                        ocf_log err "Could not determine Hypervisor"
                        return $OCF_ERR_ARGS
                fi
                echo Hypervisor: $OCF_RESKEY_hypervisor
        fi

        #
        # Xen hypervisor only for when use_virsh = 0.
        #
        if [ "$OCF_RESKEY_use_virsh" = "0" ]; then
                if [ "$OCF_RESKEY_hypervisor" != "xen" ]; then
                        ocf_log err "Cannot use $OCF_RESKEY_hypervisor hypervisor without using virsh"
                        return $OCF_ERR_ARGS
                fi

With the following environment variables set, when I tested by hand the agent uses xm commands:

env | grep OCF
OCF_RESKEY_hypervisor=xen
OCF_RESKEY_path=/vms_c
OCF_RESKEY_name=guest1
OCF_RESKEY_use_virsh=0

[root@client ~]# /usr/share/cluster/vm.sh status
Management tool: xm
<err>    Cannot find 'xm'; is it installed?
[vm.sh] Cannot find 'xm'; is it installed?


I don't have xen installed to test it


                if [ -n "$OCF_RESKEY_xmlfile" ]; then
                        ocf_log err "Cannot use xmlfile if use_virsh is set to 0"
                        return $OCF_ERR_ARGS
                fi



2013/11/25 emmanuel segura <emi2fast@xxxxxxxxx>
Hello Paras

The export command was missing in front of the variables; the correct way is this:

export OCF_RESKEY_name="guest1" ; export OCF_RESKEY_path="/vms_c" ; export OCF_RESKEY_use_virsh=0
[root@client ~]# env | grep OCF
OCF_RESKEY_path=/vms_c
OCF_RESKEY_name=guest1
OCF_RESKEY_use_virsh=0
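Equivalently, a single export can take all three assignments at once:

export OCF_RESKEY_name="guest1" OCF_RESKEY_path="/vms_c" OCF_RESKEY_use_virsh=0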



2013/11/25 emmanuel segura <emi2fast@xxxxxxxxx>
Hello Paras

I have CentOS 6; I don't know if it is different on Red Hat 5, but I saw that vm.sh calls the do_start function when the start parameter is given:

do_start()
{
        if [ "$OCF_RESKEY_use_virsh" = "1" ]; then
                do_virsh_start $*
                return $?
        fi

        do_xm_start $*
        return $?
}

I don't know why, because vm.sh uses virsh when you launch the script by hand :(


2013/11/25 Paras pradhan <pradhanparas@xxxxxxxxx>
Looks like use_virsh=0 has no effect.

--
[root@cvtst3 ~]# export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0
[root@cvtst3 ~]# set -x
++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
[root@cvtst3 ~]# /usr/share/cluster/vm.sh start
+ /usr/share/cluster/vm.sh start
Hypervisor: xen
Management tool: virsh
Hypervisor URI: xen:///
Migration URI format: xenmigr://target_host/
Virtual machine guest1 is error: failed to get domain 'guest1'
error: Domain not found: xenUnifiedDomainLookupByName

<debug>  virsh -c xen:/// start guest1
error: failed to get domain 'guest1'
error: Domain not found: xenUnifiedDomainLookupByName

++ printf '\033]0;%s@%s:%s\007' root cvtst3 '~'
[root@cvtst3 ~]# set +x
+ set +x
---


-Paras.


On Fri, Nov 22, 2013 at 5:22 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
Hello Paras

Stop the vm and retry starting it with the following commands; if you get an error, show it:

export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c" ; OCF_RESKEY_use_virsh=0


set -x
/usr/share/cluster/vm.sh start
set +x


2013/11/22 Paras pradhan <pradhanparas@xxxxxxxxx>
I found the workaround to my issue. What i did is:

Run the vm using xm and then start it using clusvcadm. This works for me for the time being, but I am not sure what is causing this. This is what I did:

xm create /vms_c/guest1
clusvcadm -e vm:guest1   (this detects that guest1 is up and quickly changes its status to success)

Although I used virt-install, it also creates a xen-format configuration file, and since use_virsh=0 it should be able to use this xen-format config file. This is my vm configuration:

---
name = "guest1"
maxmem = 2048
memory = 512
vcpus = 1
#cpus="1-2"
bootloader = "/usr/bin/pygrub"
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "restart"
vfb = [  ]
disk = [ "tap:aio:/vms_c/guest1.img,xvda,w", "tap:aio:/vms_c/guest1-disk.img,xvdb,w" ]
vif = [ "rate=10MB/s,mac=00:16:3e:6b:be:71,bridge=xenbr0" ]

---
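A quick way to sanity-check that config on each node (assuming your xm build has the -n/--dryrun option; it parses the file and prints the resulting config without actually creating the domain):

xm create -n /vms_c/guest1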

Thanks for your help, Emmanuel! Really appreciate it.

-Paras.


On Fri, Nov 22, 2013 at 11:10 AM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
OK, but your vm doesn't start on other nodes; I think this is due to configuration problems.
================================================================
Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <notice> start on vm "guest1" returned 1 (generic error) 
Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <warning> #68: Failed to start vm:guest1; return value: 1 
Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <notice> Stopping service vm:guest1 
Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <notice> Service vm:guest1 is recovering 
Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <warning> #71: Relocating failed service vm:guest1 
Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <notice> Service vm:guest1 is stopped
================================================================
In short, try this on every cluster node:


export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"

set -x
/usr/share/cluster/vm.sh start
/usr/share/cluster/vm.sh stop

After you check that your vm can start and stop on every cluster node:

/usr/share/cluster/vm.sh start
/usr/share/cluster/vm.sh migrate name_of_a_cluster_node
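Once start/stop/migrate work by hand, the cluster-level equivalent should be something like this (vtst2 is just an example target; -M is clusvcadm's migrate operation for vm services):

clusvcadm -M vm:guest1 -m vtst2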

can you show me your vm configuration under /vms_c?

Thanks
Emmanuel


2013/11/22 Paras pradhan <pradhanparas@xxxxxxxxx>
Also, to test, I set use_virsh=1; same problem. The vm does not start up if the failover domain members are offline.

-Paras.


On Fri, Nov 22, 2013 at 10:37 AM, Paras pradhan <pradhanparas@xxxxxxxxx> wrote:
Well, that seems theoretically correct. But right now my cluster has use_virsh=0 and I don't have any issue until the members of the failover domain are offline. So I'm wondering what clusvcadm -e is looking for when I don't use virsh.




On Fri, Nov 22, 2013 at 10:05 AM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
If you used virt-install, I think you need to use virsh. The cluster uses the xen xm command if you have use_virsh=0 and virsh if you have use_virsh=1 in your cluster config.


2013/11/22 Paras pradhan <pradhanparas@xxxxxxxxx>

I use virt-install to create virtual machines. Is there a way to debug why clusvcadm -e vm:guest1 is failing? vm.sh seems to use virsh, and my cluster.conf has use_virsh=0.


Thanks

Paras.


On Nov 21, 2013 5:53 PM, "emmanuel segura" <emi2fast@xxxxxxxxx> wrote:

but did you configure your vm with xen tools or using virt-manager?


2013/11/22 Paras pradhan <pradhanparas@xxxxxxxxx>
Well, no, I don't want to use virsh. But as we are debugging with virsh now, I found a strange issue.

I exported an XML file and imported it on all nodes. Ran:


---
name="guest1" path="/vms_c"

export OCF_RESKEY_name="guest1" ; OCF_RESKEY_path="/vms_c"

set -x
/usr/share/cluster/vm.sh start
set +x

--
The vm starts now. BUT from the cluster service, clusvcadm -e vm:guest1, same error.


So if I populate all my domains' config files to all my cluster nodes and set use_virsh=1, the issue is resolved. But this is a lot of work for those who have hundreds of vms.

vm.sh start uses virsh. Is there a way to tell it not to use virsh?


Thanks
Paras.


On Thu, Nov 21, 2013 at 5:19 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
If you are using virsh to manage your vms, change this in your cluster.conf:

from
use_virsh="0"
to
use_virsh="1"


2013/11/22 Paras pradhan <pradhanparas@xxxxxxxxx>
I think I found the problem.

virsh list --all does not show my vm. This is because it was created on another node, and that node has it. Now I want to start the service on a different node where it was not created, i.e. where virsh list --all does not show an entry for it. Is it possible to create this entry using a xen config file? Looks like this is now a Xen issue rather than a linux-cluster issue. :)
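One possible approach (assuming your libvirt is new enough to support the xen-xm input format): convert the xm config into domain XML and define it on the other nodes, e.g.

virsh domxml-from-native xen-xm /vms_c/guest1 > /tmp/guest1.xml
virsh define /tmp/guest1.xml
virsh list --all   # guest1 should now show up as shut off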

Paras.



On Thu, Nov 21, 2013 at 4:58 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
1: Did you verify your xen live-migration configuration?
2: Where does your vm disk reside?
3: Can you see your vm defined on every cluster node with xm list?


2013/11/21 Paras pradhan <pradhanparas@xxxxxxxxx>
This is what I get

Hypervisor: xen
Management tool: virsh
Hypervisor URI: xen:///
Migration URI format: xenmigr://target_host/
Virtual machine guest1 is error: failed to get domain 'guest1'
error: Domain not found: xenUnifiedDomainLookupByName

<debug>  virsh -c xen:/// start guest1
error: failed to get domain 'guest1'
error: Domain not found: xenUnifiedDomainLookupByName

++ printf '\033]0;%s@%s:%s\007' root vtst3 '~'
[root@cvtst3 ~]# set +x
+ set +x


--


I am wondering why it failed to get the domain.


-Paras.



On Thu, Nov 21, 2013 at 4:43 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
yes


2013/11/21 Paras pradhan <pradhanparas@xxxxxxxxx>
Well, it is guest1, isn't it?

<vm autostart="1" domain="myfd1" exclusive="0" max_restarts="0" name="guest1" path="/vms_c" recovery="restart" restart_expire_time="0" use_virsh="0"/>

It is a vm service, if that matters.

-Paras.




On Thu, Nov 21, 2013 at 4:22 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
Use the service name you defined in your cluster.conf.


2013/11/21 Paras pradhan <pradhanparas@xxxxxxxxx>
Says:

Running in test mode.
No resource guest1 of type service found

-Paras.


On Thu, Nov 21, 2013 at 4:07 PM, emmanuel segura <emi2fast@xxxxxxxxx> wrote:
rg_test test /etc/cluster/cluster.conf start service guest1


2013/11/21 Paras pradhan <pradhanparas@xxxxxxxxx>
Hi,

My failover domain looks like this:

<failoverdomain name="myfd1" nofailback="1" ordered="1" restricted="0">
        <failoverdomainnode name="vtst1" priority="1"/>
        <failoverdomainnode name="vtst3" priority="2"/>
        <failoverdomainnode name="vtst2" priority="3"/>
</failoverdomain>


I have a vm service that uses this failover domain. If node vtst1 is offline, the service does not start on vtst3, which is second in priority.

I tried to start it with clusvcadm -e vm:guest1, and even with the -F and -m options, as shown below.
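For reference, the variants look like this (vtst3 as an example target; -F enables according to failover domain rules, -m requests a specific member):

clusvcadm -e vm:guest1
clusvcadm -e vm:guest1 -F
clusvcadm -e vm:guest1 -m vtst3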

All I see is this error:

Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <notice> start on vm "guest1" returned 1 (generic error) 
Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <warning> #68: Failed to start vm:guest1; return value: 1 
Nov 21 15:40:29 vtst3 clurgmgrd[13911]: <notice> Stopping service vm:guest1 
Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <notice> Service vm:guest1 is recovering 
Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <warning> #71: Relocating failed service vm:guest1 
Nov 21 15:40:35 vtst3 clurgmgrd[13911]: <notice> Service vm:guest1 is stopped 


How do I debug?
Thanks!
Paras.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
