Re: Fwd: Re: Ubuntu xenial and ceph jewel systemd

On 2016-04-11 18:15, hp cre wrote:
> ---------- Forwarded message ----------
> From: "hp cre" <hpcre1@xxxxxxxxx>
> Date: 11 Apr 2016 15:50
> Subject: Re:  Ubuntu xenial and ceph jewel systemd
> To: "James Page" <james.page@xxxxxxxxxx>
> Cc:
> 
> Here is exactly what has been done (just started from scratch today):
> 
> 1- install default xenial beta 2
> 
> 2- run apt-get update && apt-get dist-upgrade (this step was not done on
> the first trial)
> after the update, I got the following warnings:
> "W: plymouth: The plugin label.so is missing, the selected theme might
> not work as expected.
> W: plymouth: You might want to install the plymouth-themes and
> plymouth-label package to fix this.
> W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
> "
> so I ran apt-get -y install plymouth-themes
> 
> 3- wget
> http://download.ceph.com/debian-jewel/pool/main/c/ceph-deploy/ceph-deploy_1.5.31_all.deb

Did you try the xenial ceph-deploy package? There's no need to wget it;
it's right in the repository:

# apt-get install ceph-deploy


> 4- dpkg -i ceph-deploy_1.5.31_all.deb
> got unmet-dependency errors, so I ran apt-get -f install, which
> installed all the missing packages.
> 
> 5- followed ceph docs preflight checklist (sudo file, ssh config file,
> ssh-copy-id, install ntp)
> 
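(For reference, the preflight steps roughly amount to the following; the
user "wes" and host "xen1" are taken from this thread. On each node:

# apt-get install ntp
# echo "wes ALL = (root) NOPASSWD:ALL" > /etc/sudoers.d/wes
# chmod 0440 /etc/sudoers.d/wes

and on the admin node, as the deploy user:

$ ssh-keygen
$ ssh-copy-id wes@xen1

plus a ~/.ssh/config entry so ceph-deploy connects as that user.)
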
> Followed the storage cluster quick start guide
> 
> 6- ceph-deploy new xen1 (first node) --> all ok
> 
> 7-  edit ceph.conf --> osd pool default size = 2
> 
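(For context, after that edit the [global] section presumably looks
something like the following; the fsid is elided here, and the monitor
address is the one that appears in the log further down:

[global]
fsid = ...
mon_initial_members = xen1
mon_host = 192.168.56.10
osd pool default size = 2

The size setting tells ceph to keep two replicas instead of the default
three.)
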
> 8- ceph-deploy install --release=jewel xen1 --> all ok (this time it
> installed jewel 10.1.1, yesterday it was 10.1.0)
> 
> 9- ceph-deploy mon create-initial --> same error:
> 
> wes@xen1:~/cl$ ceph-deploy mon create-initial
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/wes/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy mon
> create-initial
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username                      : None
> [ceph_deploy.cli][INFO  ]  verbose                       : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
> [ceph_deploy.cli][INFO  ]  subcommand                    : create-initial
> [ceph_deploy.cli][INFO  ]  quiet                         : False
> [ceph_deploy.cli][INFO  ]  cd_conf                       :
> <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ffb88bfed88>
> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
> [ceph_deploy.cli][INFO  ]  func                          : <function mon
> at 0x7ffb88bdcf50>
> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
> [ceph_deploy.cli][INFO  ]  keyrings                      : None
> [ceph_deploy.cli][INFO  ]  default_release               : False
> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts xen1
> [ceph_deploy.mon][DEBUG ] detecting platform for host xen1 ...
> [xen1][DEBUG ] connection detected need for sudo
> [xen1][DEBUG ] connected to host: xen1
> [xen1][DEBUG ] detect platform information from remote host
> [xen1][DEBUG ] detect machine type
> [xen1][DEBUG ] find the location of an executable
> [ceph_deploy.mon][INFO  ] distro info: Ubuntu 16.04 xenial
> [xen1][DEBUG ] determining if provided host has same hostname in remote
> [xen1][DEBUG ] get remote short hostname
> [xen1][DEBUG ] deploying mon to xen1
> [xen1][DEBUG ] get remote short hostname
> [xen1][DEBUG ] remote hostname: xen1
> [xen1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [xen1][DEBUG ] create the mon path if it does not exist
> [xen1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-xen1/done
> [xen1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-xen1/done
> [xen1][INFO  ] creating keyring file:
> /var/lib/ceph/tmp/ceph-xen1.mon.keyring
> [xen1][DEBUG ] create the monitor keyring file
> [xen1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs -i
> xen1 --keyring /var/lib/ceph/tmp/ceph-xen1.mon.keyring --setuser 64045
> --setgroup 64045
> [xen1][DEBUG ] ceph-mon: mon.noname-a 192.168.56.10:6789/0 is local,
> renaming to mon.xen1
> [xen1][DEBUG ] ceph-mon: set fsid to d56c2ad9-dc66-4b6a-b269-e32eecc05571
> [xen1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-xen1
> for mon.xen1
> [xen1][INFO  ] unlinking keyring file
> /var/lib/ceph/tmp/ceph-xen1.mon.keyring
> [xen1][DEBUG ] create a done file to avoid re-doing the mon deployment
> [xen1][DEBUG ] create the init path if it does not exist
> [xen1][INFO  ] Running command: sudo initctl emit ceph-mon cluster=ceph
> id=xen1
> [xen1][ERROR ] Traceback (most recent call last):
> [xen1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/vendor/remoto/process.py",
> line 119, in run
> [xen1][ERROR ]     reporting(conn, result, timeout)
> [xen1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/vendor/remoto/log.py",
> line 13, in reporting
> [xen1][ERROR ]     received = result.receive(timeout)
> [xen1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py",
> line 704, in receive
> [xen1][ERROR ]     raise self._getremoteerror() or EOFError()
> [xen1][ERROR ] RemoteError: Traceback (most recent call last):
> [xen1][ERROR ]   File
> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/vendor/remoto/lib/vendor/execnet/gateway_base.py",
> line 1036, in executetask
> [xen1][ERROR ]     function(channel, **kwargs)
> [xen1][ERROR ]   File "<remote exec>", line 12, in _remote_run
> [xen1][ERROR ]   File "/usr/lib/python2.7/subprocess.py", line 711, in
> __init__
> [xen1][ERROR ]     errread, errwrite)
> [xen1][ERROR ]   File "/usr/lib/python2.7/subprocess.py", line 1340, in
> _execute_child
> [xen1][ERROR ]     raise child_exception
> [xen1][ERROR ] OSError: [Errno 2] No such file or directory
> [xen1][ERROR ]
> [xen1][ERROR ]
> [ceph_deploy.mon][ERROR ] Failed to execute command: initctl emit
> ceph-mon cluster=ceph id=xen1
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors
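
(As a workaround on a systemd host, the monitor that mkfs just created
can presumably be started by hand with the jewel systemd units instead
of initctl:

$ sudo systemctl start ceph-mon@xen1
$ sudo systemctl enable ceph-mon@xen1

though the real fix is for ceph-deploy to pick systemctl here, as James
notes further down the thread.)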
> 
> 
> On 11 April 2016 at 14:05, hp cre <hpcre1@xxxxxxxxx> wrote:
> 
>     In the process of reproducing it now. I'll attach a full command log
> 
>     On 11 Apr 2016 11:42, "James Page" <james.page@xxxxxxxxxx> wrote:
> 
>         It would be handy to get visibility of your deployment log data;
>         I'm not currently able to reproduce your issue deploying ceph
>         using ceph-deploy on a small three-node install running xenial;
>         it's correctly detecting systemd and using systemctl instead of
>         initctl.
> 
>         On Mon, 11 Apr 2016 at 10:18 James Page
>         <james.page@xxxxxxxxxx> wrote:
> 
>             On Mon, 11 Apr 2016 at 10:02 hp cre <hpcre1@xxxxxxxxx> wrote:
> 
>                 Hello James,
> 
>                 It's a default install of xenial server beta 2 release.
>                 Created a user then followed the ceph installation quick
>                 start exactly as it is.
> 
>                 Ceph-deploy version 1.5.31 was used as follows:
> 
>                 1- ceph-deploy new node1
>                 2- ceph-deploy install --release jewel  node1
>                 3- ceph-deploy mon create-initial
> 
>                 Step 3 gave an error in the Python scripts, meaning it
>                 could not find the initctl command. I searched for this
>                 command and found out it belongs to upstart.
> 
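(A quick way to confirm which init system is actually running on a host:

$ ps -p 1 -o comm=

This should print "systemd" on a stock xenial install.)
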
>             I suspect that ceph-deploy is not playing nicely with
>             systemd-based Ubuntu releases - I'll take a look now...
> 
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


