ceph-deploy disk activate error msg

Adding ceph-users, back to the discussion.

Can you tell me if `ceph-deploy admin cephosd02` was what worked or if
it was the scp'ing of keys?
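
One way to tell would be to check which keyring on cephosd02 actually
changed, e.g. (a sketch):

    ls -l /etc/ceph/ceph.client.admin.keyring \
          /var/lib/ceph/bootstrap-osd/ceph.keyring

`ceph-deploy admin <host>` only pushes ceph.conf and the client.admin
keyring to /etc/ceph, while the scp step also placed the bootstrap-osd
key, so the modification times at least show what ended up on the host.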

On Wed, Aug 6, 2014 at 12:36 PM, German Anders <ganders at despegar.com> wrote:
> It works!!! :) Thanks a lot Alfredo. I also want to ask if you know how I can
> remove an OSD server from the osd tree:
>
> ceph at cephmon01:~$ ceph osd tree
> # id    weight    type name    up/down    reweight
> -1    24.57    root default
> -2    21.84        host cephosd01
> 0    2.73            osd.0    down    0
> 1    2.73            osd.1    down    0
> 2    2.73            osd.2    down    0
> 3    2.73            osd.3    down    0
> 4    2.73            osd.4    down    0
> 5    2.73            osd.5    down    0
> 6    2.73            osd.6    down    0
> 7    2.73            osd.7    down    0
> -3    0        host cephosd03
> -4    2.73        host cephosd02
> 8    2.73            osd.8    down    0
>
> I want to remove host "cephosd03" from the tree
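>
> A sketch of how an empty host bucket can be dropped from the CRUSH map
> (assuming cephosd03 really has nothing under it, as the tree above
> suggests):
>
>     # removes the (empty) host bucket from the CRUSH map
>     ceph osd crush remove cephosd03
>
> If a host still held OSD entries, each one would need to go first, e.g.
> ceph osd out N, ceph osd crush remove osd.N, ceph auth del osd.N,
> ceph osd rm N.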
>
> Thanks a lot!!
>
> Best regards,
>
>
> German Anders
>
> --- Original message ---
> Subject: Re: [ceph-users] ceph-deploy disk activate error msg
> From: Alfredo Deza <alfredo.deza at inktank.com>
> To: German Anders <ganders at despegar.com>
> Date: Wednesday, 06/08/2014 13:32
>
> On Wed, Aug 6, 2014 at 12:23 PM, German Anders <ganders at despegar.com> wrote:
>
> Unfortunately, after upgrading the ceph-deploy version I'm still facing the
> problem:
>
>
> It is possible that you have invalid keyrings... have you
> tried/retried the setup more than once? Or is that one host
> complaining from scratch?
>
> You could try and see if copying the keys from the monitor node helps:
>
> 1) scp /etc/ceph/ceph.client.admin.keyring cephosd02:/etc/ceph
> 2) scp /var/lib/ceph/bootstrap-osd/ceph.keyring
> cephosd02:/var/lib/ceph/bootstrap-osd
>
> I think you could try with ceph-deploy as well, with `ceph-deploy
> admin cephosd02`
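>
> Once the keys are copied, one way to confirm they match what the
> cluster expects (a sketch; run on the monitor node and on cephosd02
> respectively):
>
>     # on the monitor: the key the cluster has registered
>     ceph auth get client.bootstrap-osd
>
>     # on cephosd02: the key that was just copied over
>     cat /var/lib/ceph/bootstrap-osd/ceph.keyring
>
> The "key = ..." values should be identical; a mismatch would explain
> the client.bootstrap-osd authentication error in the log below.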
>
>
>
>
> ceph at cephdeploy01:~/ceph-deploy$ sudo dpkg -s ceph-deploy
>
> Package: ceph-deploy
> Status: install ok installed
> Priority: optional
> Section: admin
> Installed-Size: 437
> Maintainer: Sage Weil <sage at newdream.net>
> Architecture: all
> Version: 1.5.10trusty
> Depends: python (>= 2.7), python-argparse, python-setuptools, python (<<
> 2.8), python:any (>= 2.7.1-0ubuntu2), python-pkg-resources
> Description: Ceph-deploy is an easy to use configuration tool
>
>    for the Ceph distributed storage system.
>    .
>    This package includes the programs and libraries to support
>    simple ceph cluster deployment.
> Homepage: http://ceph.com/
>
>
>
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
> cephosd02:/dev/sdd1:/dev/sde1
> [cephosd02][DEBUG ] connected to host: cephosd02
> [cephosd02][DEBUG ] detect platform information from remote host
> [cephosd02][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
> [ceph_deploy.osd][DEBUG ] activating host cephosd02 disk /dev/sdd1
> [ceph_deploy.osd][DEBUG ] will use init type: upstart
> [cephosd02][INFO ] Running command: sudo ceph-disk -v activate --mark-init
> upstart --mount /dev/sdd1
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE
> -ovalue -- /dev/sdd1
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
> [cephosd02][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdd1 on
> /var/lib/ceph/tmp/mnt.tG9uYV with options noatime,user_subvol_rm_allowed
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t btrfs -o
> noatime,user_subvol_rm_allowed -- /dev/sdd1 /var/lib/ceph/tmp/mnt.tG9uYV
> [cephosd02][WARNIN] DEBUG:ceph-disk:Cluster uuid is
> 40137481-b22c-4b47-b6f7-9f160e81d896
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
> [cephosd02][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
> [cephosd02][WARNIN] DEBUG:ceph-disk:OSD uuid is
> 2996a04b-3966-4a9c-ac91-5639c998b40a
> [cephosd02][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster
> ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise
> 2996a04b-3966-4a9c-ac91-5639c998b40a
> [cephosd02][WARNIN] 2014-08-06 12:22:15.287950 7f2bfc436700 0 librados:
> client.bootstrap-osd authentication error (1) Operation not permitted
>
> [cephosd02][WARNIN] Error connecting to cluster: PermissionError
> [cephosd02][WARNIN] ERROR:ceph-disk:Failed to activate
> [cephosd02][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.tG9uYV
> [cephosd02][WARNIN] INFO:ceph-disk:Running command: /bin/umount --
> /var/lib/ceph/tmp/mnt.tG9uYV
>
> [cephosd02][WARNIN] ceph-disk: Error: ceph osd create failed: Command
> '/usr/bin/ceph' returned non-zero exit status 1:
> [cephosd02][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v
> activate --mark-init upstart --mount /dev/sdd1
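>
> The failing step can also be reproduced by hand on cephosd02, which
> isolates it to the bootstrap-osd key (this just repeats the command
> from the log above):
>
>     ceph --cluster ceph --name client.bootstrap-osd \
>          --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
>          osd create --concise 2996a04b-3966-4a9c-ac91-5639c998b40a
>
> If this still reports "Operation not permitted", the keyring on the
> host does not match the client.bootstrap-osd entry the monitors have.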
>
>
>
>
> German Anders
>
> --- Original message ---
> Subject: Re: [ceph-users] ceph-deploy disk activate error msg
> From: Alfredo Deza <alfredo.deza at inktank.com>
> To: German Anders <ganders at despegar.com>
> Date: Wednesday, 06/08/2014 13:18
>
> On Wed, Aug 6, 2014 at 12:04 PM, German Anders <ganders at despegar.com> wrote:
>
> Hi Alfredo,
>                How are you? First of all, I want to thank you for the quick
> response. I'm using Version: 1.4.0-0ubuntu1, but when I try to upgrade
> ceph-deploy it says there aren't any updates.
>
>
> You are probably using the version that is supported by Ubuntu itself.
> Here is a quick way to get it directly from our repos:
>
> http://ceph.com/docs/master/start/quick-start-preflight/#advanced-package-tool-apt
>
> Can you try that and see if you can upgrade?
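>
> Roughly, the steps on that page boil down to adding the ceph.com apt
> repo and reinstalling (a sketch; take the release key and the exact
> repo name, e.g. debian-firefly, from the page itself):
>
>     echo deb http://ceph.com/debian-firefly/ $(lsb_release -sc) main | \
>         sudo tee /etc/apt/sources.list.d/ceph.list
>     sudo apt-get update && sudo apt-get install ceph-deploy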
>
>
> ceph at cephdeploy01:~/ceph-deploy$ dpkg -s ceph-deploy
> Package: ceph-deploy
> Status: install ok installed
> Priority: optional
> Section: admin
> Installed-Size: 415
> Maintainer: Ubuntu Developers <ubuntu-devel-discuss at lists.ubuntu.com>
> Architecture: all
> Version: 1.4.0-0ubuntu1
> Depends: python (>= 2.7), python (<< 2.8), python:any (>= 2.7.1-0ubuntu2),
> python-pkg-resources
> Description: Deployment and configuration of Ceph.
>      Ceph-deploy is an easy to use deployment and configuration
>      tool for the Ceph distributed storage system.
>      .
>      This package includes the programs and libraries to support
>      simple ceph cluster deployment.
> Homepage: http://ceph.com/
> Original-Maintainer: Sage Weil <sage at newdream.net>
>
>
>
>
>
> German Anders
>
> --- Original message ---
> Subject: Re: [ceph-users] ceph-deploy disk activate error msg
> From: Alfredo Deza <alfredo.deza at inktank.com>
> To: German Anders <ganders at despegar.com>
> Cc: ceph-users at lists.ceph.com <ceph-users at lists.ceph.com>
> Date: Wednesday, 06/08/2014 12:58
>
> On Wed, Aug 6, 2014 at 11:23 AM, German Anders <ganders at despegar.com> wrote:
>
> Hi to all,
>             I'm having some issues while trying to deploy an OSD with btrfs:
>
> ceph at cephdeploy01:~/ceph-deploy$ ceph-deploy disk activate --fs-type btrfs
> cephosd02:sdd1:/dev/sde1
> [ceph_deploy.cli][INFO ] Invoked (1.4.0): /usr/bin/ceph-deploy disk
> activate --fs-type btrfs cephosd02:sdd1:/dev/sde1
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks
> cephosd02:/dev/sdd1:/dev/sde1
> [cephosd02][DEBUG ] connected to host: cephosd02
> [cephosd02][DEBUG ] detect platform information from remote host
> [cephosd02][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
> [ceph_deploy.osd][DEBUG ] activating host cephosd02 disk /dev/sdd1
> [ceph_deploy.osd][DEBUG ] will use init type: upstart
> [cephosd02][INFO ] Running command: sudo ceph-disk-activate --mark-init
> upstart --mount /dev/sdd1
> [cephosd02][WARNIN] 2014-08-06 11:22:02.106327 7f0188c96700 0 librados:
> client.bootstrap-osd authentication error (1) Operation not permitted
> [cephosd02][WARNIN] Error connecting to cluster: PermissionError
> [cephosd02][WARNIN] ERROR:ceph-disk:Failed to activate
> [cephosd02][WARNIN] ceph-disk: Error: ceph osd create failed: Command
> '/usr/bin/ceph' returned non-zero exit status 1:
> [cephosd02][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
> ceph-disk-activate --mark-init upstart --mount /dev/sdd1
>
>
> Can you try with the latest ceph-deploy (1.5.10 as of this writing) ?
>
> And then paste the output of that; hopefully this is something that
> was already addressed!
>
>
> It seems to have something to do with permissions; I've also tried
> running the command manually on the OSD server, but I get the same error
> message. Any ideas?
>
> Thanks in advance,
>
> Best regards,
>
>
> German Anders
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

