Re: gdeploy, Centos7 & Ansible 2.3

On 05/06/2017 03:14 PM, hvjunk wrote:
Hi there,

 So, I've been busy testing/installing/etc., and was pointed last night in the direction of gdeploy. I did a quick try on Ubuntu 16.04, ran into some module-related trouble, so I retried on CentOS 7 this morning.

Seems that the generated playbooks aren’t Ansible 2.3 “compatible”…

The brick VMs are set up using the set-vms-centos.sh script and the sshkeys-centos.yml playbook from https://bitbucket.org/dismyne/gluster-ansibles/src/24b62dcc858364ee3744d351993de0e8e35c2680/?at=Centos-gdeploy-tests
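
Roughly, the net effect of that key-setup step is to let the gdeploy VM reach the three brick VMs as root over SSH. As a sketch only (the actual sshkeys-centos.yml may do this differently), the by-hand equivalent would be:

   # on the gdeploy VM, assuming the key pair already exists as ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
   for h in 10.10.10.11 10.10.10.12 10.10.10.13; do
       ssh-copy-id -i ~/.ssh/id_rsa.pub root@$h
   done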

The run on the “installation”/gdeploy VM:

The relevant history output:

   18  yum install epel-release
   19  yum install ansible
   20  yum search gdeploy
   22  vi t.conf
   23  gdeploy -c t.conf
   24  history
   25  mkdir .ssh
   26  cd .ssh
   27  ls
   28  vi id_rsa
   29  chmod 0600 id_rsa
   30  cd
   31  gdeploy -c t.conf
   32  ssh -v 10.10.10.11
   33  ssh -v 10.10.10.12
   34  ssh -v 10.10.10.13
   35  gdeploy -c t.conf

The t.conf:
===<snip>===
[hosts]
10.10.10.11
10.10.10.12
10.10.10.13

[backend-setup]
devices=/dev/sdb
mountpoints=/gluster/brick1
brick_dirs=/gluster/brick1/one

===<snip>===
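
For context on what this config asks for: [backend-setup] should turn /dev/sdb on each host into an LVM-backed XFS brick mounted at /gluster/brick1, with /gluster/brick1/one as the brick directory. A rough by-hand equivalent (sketch only; gdeploy normally uses thinly provisioned LVs, and the GLUSTER_vg1/GLUSTER_lv1 names are taken from the run output below) would be:

   pvcreate /dev/sdb
   vgcreate GLUSTER_vg1 /dev/sdb
   lvcreate -n GLUSTER_lv1 -l 100%FREE GLUSTER_vg1   # simplified; no thin pool / metadata LV
   mkfs.xfs /dev/GLUSTER_vg1/GLUSTER_lv1
   mkdir -p /gluster/brick1
   mount /dev/GLUSTER_vg1/GLUSTER_lv1 /gluster/brick1
   mkdir -p /gluster/brick1/one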

The gdeploy run:

==<snip>===
[root@linked-clone-of-centos-linux ~]# gdeploy -c t.conf
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/tmp/tmpezTsyO/pvcreate.yml': line 16, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  # Create pv on all the disks
  - name: Create Physical Volume
    ^ here


The error appears to have been in '/tmp/tmpezTsyO/pvcreate.yml': line 16, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  # Create pv on all the disks
  - name: Create Physical Volume
    ^ here

Ignoring errors...
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/tmp/tmpezTsyO/vgcreate.yml': line 8, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
  - name: Create volume group on the disks
    ^ here


The error appears to have been in '/tmp/tmpezTsyO/vgcreate.yml': line 8, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
  - name: Create volume group on the disks
    ^ here

Ignoring errors...
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.

The error appears to have been in '/tmp/tmpezTsyO/auto_lvcreate_for_gluster.yml': line 7, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
  - name: Create logical volume named metadata
    ^ here


The error appears to have been in '/tmp/tmpezTsyO/auto_lvcreate_for_gluster.yml': line 7, column 5, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

  tasks:
  - name: Create logical volume named metadata
    ^ here

Ignoring errors...

PLAY [gluster_servers] **********************************************************************************************************************************************************************

TASK [Create a xfs filesystem] **************************************************************************************************************************************************************
failed: [10.10.10.13] (item=/dev/GLUSTER_vg1/GLUSTER_lv1) => {"failed": true, "item": "/dev/GLUSTER_vg1/GLUSTER_lv1", "msg": "Device /dev/GLUSTER_vg1/GLUSTER_lv1 not found."}
failed: [10.10.10.12] (item=/dev/GLUSTER_vg1/GLUSTER_lv1) => {"failed": true, "item": "/dev/GLUSTER_vg1/GLUSTER_lv1", "msg": "Device /dev/GLUSTER_vg1/GLUSTER_lv1 not found."}
failed: [10.10.10.11] (item=/dev/GLUSTER_vg1/GLUSTER_lv1) => {"failed": true, "item": "/dev/GLUSTER_vg1/GLUSTER_lv1", "msg": "Device /dev/GLUSTER_vg1/GLUSTER_lv1 not found."}
to retry, use: --limit @/tmp/tmpezTsyO/fscreate.retry

PLAY RECAP **********************************************************************************************************************************************************************************
10.10.10.11                : ok=0    changed=0    unreachable=0    failed=1
10.10.10.12                : ok=0    changed=0    unreachable=0    failed=1
10.10.10.13                : ok=0    changed=0    unreachable=0    failed=1

Ignoring errors...

PLAY [gluster_servers] **********************************************************************************************************************************************************************

TASK [Create the backend disks, skips if present] *******************************************************************************************************************************************
changed: [10.10.10.12] => (item={u'device': u'/dev/GLUSTER_vg1/GLUSTER_lv1', u'path': u'/gluster/brick1'})
changed: [10.10.10.11] => (item={u'device': u'/dev/GLUSTER_vg1/GLUSTER_lv1', u'path': u'/gluster/brick1'})
changed: [10.10.10.13] => (item={u'device': u'/dev/GLUSTER_vg1/GLUSTER_lv1', u'path': u'/gluster/brick1'})

TASK [Mount the volumes] ********************************************************************************************************************************************************************
failed: [10.10.10.11] (item={u'device': u'/dev/GLUSTER_vg1/GLUSTER_lv1', u'path': u'/gluster/brick1'}) => {"failed": true, "item": {"device": "/dev/GLUSTER_vg1/GLUSTER_lv1", "path": "/gluster/brick1"}, "msg": "Error mounting /gluster/brick1: mount: special device /dev/GLUSTER_vg1/GLUSTER_lv1 does not exist\n"}
failed: [10.10.10.12] (item={u'device': u'/dev/GLUSTER_vg1/GLUSTER_lv1', u'path': u'/gluster/brick1'}) => {"failed": true, "item": {"device": "/dev/GLUSTER_vg1/GLUSTER_lv1", "path": "/gluster/brick1"}, "msg": "Error mounting /gluster/brick1: mount: special device /dev/GLUSTER_vg1/GLUSTER_lv1 does not exist\n"}
failed: [10.10.10.13] (item={u'device': u'/dev/GLUSTER_vg1/GLUSTER_lv1', u'path': u'/gluster/brick1'}) => {"failed": true, "item": {"device": "/dev/GLUSTER_vg1/GLUSTER_lv1", "path": "/gluster/brick1"}, "msg": "Error mounting /gluster/brick1: mount: special device /dev/GLUSTER_vg1/GLUSTER_lv1 does not exist\n"}
to retry, use: --limit @/tmp/tmpezTsyO/mount.retry

PLAY RECAP **********************************************************************************************************************************************************************************
10.10.10.11                : ok=1    changed=1    unreachable=0    failed=1
10.10.10.12                : ok=1    changed=1    unreachable=0    failed=1
10.10.10.13                : ok=1    changed=1    unreachable=0    failed=1

Ignoring errors…

===<snip>===




Hi,

    A new version of gdeploy has been built in which the issues seen above are fixed. Can you please update gdeploy to the version at [1] below and run the test again?

[1] https://copr-be.cloud.fedoraproject.org/results/sac/gdeploy/epel-7-x86_64/00547404-gdeploy/gdeploy-2.0.2-6.noarch.rpm
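
If the deploy VM has network access to the COPR repo, installing directly from that URL should work (a sketch; adjust for your environment):

   yum install https://copr-be.cloud.fedoraproject.org/results/sac/gdeploy/epel-7-x86_64/00547404-gdeploy/gdeploy-2.0.2-6.noarch.rpm
   gdeploy --version   # should now report 2.0.2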

Thanks

kasturi

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
