Re: ceph osd activate error

Hi Iban,

          Sure, it is there. The ceph-deploy osd prepare step worked properly; it is the activate step that throws the error.

root@cephnode1~#df -Th
Filesystem                 Type      Size  Used Avail Use% Mounted on
/dev/vda2                  ext4      7.6G  2.2G  5.4G  29% /
devtmpfs                   devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                      tmpfs     3.9G     0  3.9G   0% /dev/shm
tmpfs                      tmpfs     3.9G  8.4M  3.9G   1% /run
tmpfs                      tmpfs     3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1                  ext4      9.5G  293M  9.1G   4% /var
/dev/vda5                  ext4      9.5G   37M  9.4G   1% /tmp
/dev/mapper/vg000-mysqlvol ext4      255G  5.1G  247G   3% /home
tmpfs                      tmpfs     782M     0  782M   0% /run/user/0


root@cephnode1/home/data/osd1#pwd
/home/data/osd1


root@cephnode1/home#ls -ld data/
drwxr-xr-x 3 ceph ceph 4096 Mar  1 14:08 data/
root@zoho-cephnode1/home#ls -ld data/osd1/
drwxr-xr-x 3 ceph ceph 4096 Mar  1 14:12 data/osd1/
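
ceph-disk itself is also installed on the nodes; the prepare run below invokes /usr/sbin/ceph-disk directly. As a quick sanity check (the path shown is the one reported in the prepare log, assuming nothing has moved since):

root@cephnode1~#which ceph-disk
/usr/sbin/ceph-disk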


ceph-deploy osd prepare output:

root@cephadmin~/mycluster#ceph-deploy osd prepare cephnode1:/home/data/osd1 cephnode2:/home/data/osd2 cephnode3:/home/data/osd3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO ] Invoked (1.5.37): /usr/bin/ceph-deploy osd prepare cephnode1:/home/data/osd1 cephnode2:/home/data/osd2 cephnode3:/home/data/osd3

[ceph_deploy.cli][INFO ] ceph-deploy options:

[ceph_deploy.cli][INFO ] username : None

[ceph_deploy.cli][INFO ] disk : [('cephnode1', '/home/data/osd1', None), ('cephnode2', '/home/data/osd2', None), ('cephnode3', '/home/data/osd3', None)]

[ceph_deploy.cli][INFO ] dmcrypt : False

[ceph_deploy.cli][INFO ] verbose : False

[ceph_deploy.cli][INFO ] bluestore : None

[ceph_deploy.cli][INFO ] overwrite_conf : False

[ceph_deploy.cli][INFO ] subcommand : prepare

[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys

[ceph_deploy.cli][INFO ] quiet : False

[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xbcc7a0>

[ceph_deploy.cli][INFO ] cluster : ceph

[ceph_deploy.cli][INFO ] fs_type : xfs

[ceph_deploy.cli][INFO ] func : <function osd at 0xbbc050>

[ceph_deploy.cli][INFO ] ceph_conf : None

[ceph_deploy.cli][INFO ] default_release : False

[ceph_deploy.cli][INFO ] zap_disk : False

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cephnode1:/home/data/osd1: cephnode2:/home/data/osd2: cephnode3:/home/data/osd3:


[cephnode1][DEBUG ] connected to host: cephnode1

[cephnode1][DEBUG ] detect platform information from remote host

[cephnode1][DEBUG ] detect machine type

[cephnode1][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.3.1611 Core

[ceph_deploy.osd][DEBUG ] Deploying osd to cephnode1

[cephnode1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[cephnode1][WARNIN] osd keyring does not exist yet, creating one

[cephnode1][DEBUG ] create a keyring file

[ceph_deploy.osd][DEBUG ] Preparing host cephnode1 disk /home/data/osd1 journal None activate False

[cephnode1][DEBUG ] find the location of an executable

[cephnode1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /home/data/osd1

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[cephnode1][WARNIN] populate_data_path: Preparing osd data dir /home/data/osd1

[cephnode1][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd1/ceph_fsid.3127.tmp

[cephnode1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd1/ceph_fsid.3127.tmp

[cephnode1][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd1/fsid.3127.tmp

[cephnode1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd1/fsid.3127.tmp

[cephnode1][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd1/magic.3127.tmp

[cephnode1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd1/magic.3127.tmp

[cephnode1][INFO ] checking OSD status...

[cephnode1][DEBUG ] find the location of an executable

[cephnode1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host cephnode1 is now ready for osd use.


[cephnode2][DEBUG ] connected to host: cephnode2

[cephnode2][DEBUG ] detect platform information from remote host

[cephnode2][DEBUG ] detect machine type

[cephnode2][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.3.1611 Core

[ceph_deploy.osd][DEBUG ] Deploying osd to cephnode2

[cephnode2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[cephnode2][WARNIN] osd keyring does not exist yet, creating one

[cephnode2][DEBUG ] create a keyring file

[ceph_deploy.osd][DEBUG ] Preparing host cephnode2 disk /home/data/osd2 journal None activate False

[cephnode2][DEBUG ] find the location of an executable

[cephnode2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /home/data/osd2

[cephnode2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[cephnode2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph

[cephnode2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph

[cephnode2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph

[cephnode2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[cephnode2][WARNIN] populate_data_path: Preparing osd data dir /home/data/osd2

[cephnode2][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd2/ceph_fsid.3160.tmp

[cephnode2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd2/ceph_fsid.3160.tmp

[cephnode2][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd2/fsid.3160.tmp

[cephnode2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd2/fsid.3160.tmp

[cephnode2][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd2/magic.3160.tmp

[cephnode2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd2/magic.3160.tmp

[cephnode2][INFO ] checking OSD status...

[cephnode2][DEBUG ] find the location of an executable

[cephnode2][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host cephnode2 is now ready for osd use.


[cephnode3][DEBUG ] connected to host: cephnode3

[cephnode3][DEBUG ] detect platform information from remote host

[cephnode3][DEBUG ] detect machine type

[cephnode3][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.3.1611 Core

[ceph_deploy.osd][DEBUG ] Deploying osd to cephnode3

[cephnode3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[cephnode3][WARNIN] osd keyring does not exist yet, creating one

[cephnode3][DEBUG ] create a keyring file

[ceph_deploy.osd][DEBUG ] Preparing host cephnode3 disk /home/data/osd3 journal None activate False

[cephnode3][DEBUG ] find the location of an executable

[cephnode3][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /home/data/osd3

[cephnode3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[cephnode3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph

[cephnode3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph

[cephnode3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph

[cephnode3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size

[cephnode3][WARNIN] populate_data_path: Preparing osd data dir /home/data/osd3

[cephnode3][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd3/ceph_fsid.3228.tmp

[cephnode3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd3/ceph_fsid.3228.tmp

[cephnode3][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd3/fsid.3228.tmp

[cephnode3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd3/fsid.3228.tmp

[cephnode3][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd3/magic.3228.tmp

[cephnode3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd3/magic.3228.tmp

[cephnode3][INFO ] checking OSD status...

[cephnode3][DEBUG ] find the location of an executable

[cephnode3][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host cephnode3 is now ready for osd use.
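
The activate failure itself (full log quoted below) occurs when ceph-disk hands off to systemd and systemctl start ceph-osd@0 exits non-zero. The underlying reason should be visible on the OSD node via the commands the error message itself suggests, for example:

root@cephnode1~#systemctl status ceph-osd@0.service -l
root@cephnode1~#journalctl -xe -u ceph-osd@0.service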




Regards
Prabu GJ


---- On Wed, 01 Mar 2017 19:26:38 +0530 Iban Cabrillo <cabrillo@xxxxxxxxxxxxxx> wrote ----


Hi,
Are you sure ceph-disk is installed on the target machine?


Regards, I


On Wed, 1 Mar 2017 14:38, gjprabu <gjprabu@xxxxxxxxxxxx> wrote:


Hi All,

             Has anybody faced a similar issue, and is there any solution for this?

Regards
Prabu GJ


---- On Wed, 01 Mar 2017 14:21:14 +0530 gjprabu <gjprabu@xxxxxxxxxxxx> wrote ----

Hi Team,

     

   We are installing a new Ceph setup (version Jewel), and while activating the OSDs it throws the error: RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/data/osd1. We tried reinstalling the OSD machines but still hit the same error. Kindly let us know whether there is any solution for this.

root@cephadmin~/mycluster#ceph-deploy osd activate cephnode1:/home/data/osd1 cephnode2:/home/data/osd2 cephnode3:/home/data/osd3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy osd activate cephnode1:/home/data/osd1 cephnode2:/home/data/osd2 cephnode3:/home/data/osd3

[ceph_deploy.cli][INFO  ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None

[ceph_deploy.cli][INFO  ]  verbose                       : False

[ceph_deploy.cli][INFO  ]  overwrite_conf                : False

[ceph_deploy.cli][INFO  ]  subcommand                    : activate

[ceph_deploy.cli][INFO  ]  quiet                         : False

[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xbcc7a0>

[ceph_deploy.cli][INFO  ]  cluster                       : ceph

[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xbbc050>

[ceph_deploy.cli][INFO  ]  ceph_conf                     : None

[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.cli][INFO  ]  disk                          : [('cephnode1', '/home/data/osd1', None), ('cephnode2', '/home/data/osd2', None), ('cephnode3', '/home/data/osd3', None)]

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cephnode1:/home/data/osd1: cephnode2:/home/data/osd2: cephnode3:/home/data/osd3:


[cephnode1][DEBUG ] connected to host: cephnode1

[cephnode1][DEBUG ] detect platform information from remote host

[cephnode1][DEBUG ] detect machine type

[cephnode1][DEBUG ] find the location of an executable

[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.3.1611 Core

[ceph_deploy.osd][DEBUG ] activating host cephnode1 disk /home/data/osd1

[ceph_deploy.osd][DEBUG ] will use init type: systemd

[cephnode1][DEBUG ] find the location of an executable

[cephnode1][INFO  ] Running command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/data/osd1

[cephnode1][WARNIN] main_activate: path = /home/data/osd1

[cephnode1][WARNIN] activate: Cluster uuid is 228e2b14-a6f2-4a46-b99e-673e3cd6774f

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid

[cephnode1][WARNIN] activate: Cluster name is ceph

[cephnode1][WARNIN] activate: OSD uuid is 147347cb-cc6b-400d-9a72-abae8cc75207

[cephnode1][WARNIN] allocate_osd_id: Allocating OSD id...

[cephnode1][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 147347cb-cc6b-400d-9a72-abae8cc75207

[cephnode1][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd1/whoami.3203.tmp

[cephnode1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd1/whoami.3203.tmp

[cephnode1][WARNIN] activate: OSD id is 0

[cephnode1][WARNIN] activate: Initializing OSD...

[cephnode1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /home/data/osd1/activate.monmap

[cephnode1][WARNIN] got monmap epoch 1

[cephnode1][WARNIN] command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /home/data/osd1/activate.monmap --osd-data /home/data/osd1 --osd-journal /home/data/osd1/journal --osd-uuid 147347cb-cc6b-400d-9a72-abae8cc75207 --keyring /home/data/osd1/keyring --setuser ceph --setgroup ceph

[cephnode1][WARNIN] activate: Marking with init system systemd

[cephnode1][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd1/systemd

[cephnode1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd1/systemd

[cephnode1][WARNIN] activate: Authorizing OSD key...

[cephnode1][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring auth add osd.0 -i /home/data/osd1/keyring osd allow * mon allow profile osd

[cephnode1][WARNIN] added key for osd.0

[cephnode1][WARNIN] command: Running command: /usr/sbin/restorecon -R /home/data/osd1/active.3203.tmp

[cephnode1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /home/data/osd1/active.3203.tmp

[cephnode1][WARNIN] activate: ceph osd.0 data dir is ready at /home/data/osd1

[cephnode1][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-0 -> /home/data/osd1

[cephnode1][WARNIN] start_daemon: Starting ceph osd.0...

[cephnode1][WARNIN] command_check_call: Running command: /usr/bin/systemctl enable ceph-osd@0

[cephnode1][WARNIN] Created symlink from /etc/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.

[cephnode1][WARNIN] command_check_call: Running command: /usr/bin/systemctl start ceph-osd@0

[cephnode1][WARNIN] Job for ceph-osd@0.service failed because the control process exited with error code. See "systemctl status ceph-osd@0.service" and "journalctl -xe" for details.

[cephnode1][WARNIN] Traceback (most recent call last):

[cephnode1][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>

[cephnode1][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()

[cephnode1][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 5009, in run

[cephnode1][WARNIN]     main(sys.argv[1:])

[cephnode1][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4960, in main

[cephnode1][WARNIN]     args.func(args)

[cephnode1][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3359, in main_activate

[cephnode1][WARNIN]     osd_id=osd_id,

[cephnode1][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2906, in start_daemon

[cephnode1][WARNIN]     raise Error('ceph osd start failed', e)

[cephnode1][WARNIN] ceph_disk.main.Error

[cephnode1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/data/osd1



Regards
Prabu GJ
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
