Re: ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

This time I really cleaned up everything with purge/purgedata and made sure there were no warning messages, then went through the quick start guide again.

It still failed at the same step, but it looks like a permission problem, as shown below.

[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid fe67c319-279d-4cbd-9ebe-30e6cbc88010 --keyring /var/local/osd0/keyring --setuser ceph --setgroup ceph
[node2][WARNIN] 2016-05-26 20:45:27.499999 7f7eadf50800 -1 filestore(/var/local/osd0) mkfs: write_version_stamp() failed: (13) Permission denied
[node2][WARNIN] 2016-05-26 20:45:27.500015 7f7eadf50800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
.
.
[node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
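My guess is that the prepared data directory is not owned by the ceph user, since ceph-osd is invoked with --setuser ceph --setgroup ceph and then cannot write its version stamp. If that is the case, something like this on the OSD node should confirm and fix it (just my assumption, not verified yet; the path is taken from the log above):

   ls -ld /var/local/osd0                      # should show ceph:ceph as owner and group
   sudo chown -R ceph:ceph /var/local/osd0     # hand the directory over to the ceph user
   sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0   # retry the activate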


Here are more details:

[albert@admin-node my-cluster]$ ceph-deploy osd activate node2:/var/local/osd0 node4:/var/local/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/albert/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy osd activate node2:/var/local/osd0 node4:/var/local/osd1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2402518>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x23f5c08>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('node2', '/var/local/osd0', None), ('node4', '/var/local/osd1', None)]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/var/local/osd0: node4:/var/local/osd1:
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
[ceph_deploy.osd][DEBUG ] activating host node2 disk /var/local/osd0
[ceph_deploy.osd][DEBUG ] will use init type: systemd
[node2][DEBUG ] find the location of an executable
[node2][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0
[node2][WARNIN] main_activate: path = /var/local/osd0
[node2][WARNIN] activate: Cluster uuid is b8d12d74-3366-4f57-b8df-6e86e795508d
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] activate: Cluster name is ceph
[node2][WARNIN] activate: OSD uuid is fe67c319-279d-4cbd-9ebe-30e6cbc88010
[node2][WARNIN] activate: OSD id is 0
[node2][WARNIN] activate: Initializing OSD...
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/local/osd0/activate.monmap
[node2][WARNIN] got monmap epoch 1
[node2][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/local/osd0/activate.monmap --osd-data /var/local/osd0 --osd-journal /var/local/osd0/journal --osd-uuid fe67c319-279d-4cbd-9ebe-30e6cbc88010 --keyring /var/local/osd0/keyring --setuser ceph --setgroup ceph
[node2][WARNIN] 2016-05-26 20:45:27.499999 7f7eadf50800 -1 filestore(/var/local/osd0) mkfs: write_version_stamp() failed: (13) Permission denied
[node2][WARNIN] 2016-05-26 20:45:27.500015 7f7eadf50800 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[node2][WARNIN] 2016-05-26 20:45:27.500054 7f7eadf50800 -1  ** ERROR: error creating empty object store in /var/local/osd0: (13) Permission denied
[node2][WARNIN] Traceback (most recent call last):
[node2][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[node2][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
[node2][WARNIN]     main(sys.argv[1:])
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
[node2][WARNIN]     args.func(args)
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3277, in main_activate
[node2][WARNIN]     init=args.mark_init,
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3097, in activate_dir
[node2][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3202, in activate
[node2][WARNIN]     keyring=keyring,
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2695, in mkfs
[node2][WARNIN]     '--setgroup', get_ceph_group(),
[node2][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 439, in command_check_call
[node2][WARNIN]     return subprocess.check_call(arguments)
[node2][WARNIN]   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[node2][WARNIN]     raise CalledProcessError(retcode, cmd)
[node2][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/local/osd0/activate.monmap', '--osd-data', '/var/local/osd0', '--osd-journal', '/var/local/osd0/journal', '--osd-uuid', 'fe67c319-279d-4cbd-9ebe-30e6cbc88010', '--keyring', '/var/local/osd0/keyring', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 1
[node2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd0

[albert@admin-node my-cluster]$


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Fulvio Galeazzi
Sent: Thursday, May 26, 2016 9:17 AM
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

Hallo,
	as I spent the whole afternoon on a similar issue...  :-)

   Run purge (this will also remove the Ceph packages; I am assuming you don't care much about the existing stuff; example commands are sketched after these steps),

on all nodes (mon/osd/admin) remove
   rm -rf /var/lib/ceph/

on the OSD nodes make sure all partitions are mounted, then remove the contents and fix ownership
   rm -rf /srv/node/<whatever>*/*
   chown -R ceph.ceph /srv/node/<whatever>*/

on the admin node, in the cluster administration directory, remove
	rm ceph.bootstrap* ceph*keyring
and leave only the old ceph.conf, which you will probably substitute with the default one after step 1 (assuming, for example, you either spent some time playing with it and/or you want to force a specific fsid).
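
For the purge step on the admin node, the commands would be something like the following (node names are just placeholders for your own mon/osd hosts):

   ceph-deploy purge node1 node2 node3        # remove Ceph packages and data from the nodes
   ceph-deploy purgedata node1 node2 node3    # wipe /var/lib/ceph and /etc/ceph on the nodes
   ceph-deploy forgetkeys                     # drop the old keys from the admin directory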

   Good luck

			Fulvio

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


