Re: ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

I have read this article more than three times. Every time I retried, I followed the purge/purgedata instructions. I even reinstalled all the VMs twice, and I still get stuck at the same step.
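
For what it's worth, the error at the bottom of the trace below means that ceph-disk read the cluster uuid from the prepared OSD directory but could not find a conf file in /etc/ceph on that node with a matching fsid. A quick check to confirm the mismatch (a sketch using the paths from the log below; ceph_fsid is, as far as I can tell, the file ceph-disk reads for that uuid):

    # fsid the freshly created cluster is using on the OSD node
    $ grep fsid /etc/ceph/ceph.conf
    # fsid recorded in the prepared OSD directory from the earlier attempt
    $ cat /home/albert/my-cluster/cephd2/ceph_fsid

If the two uuids differ, the prepared directory still belongs to an earlier cluster and activate will keep failing until that directory is recreated.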


-----Original Message-----
From: Christian Balzer [mailto:chibi@xxxxxxx] 
Sent: Wednesday, May 25, 2016 11:18 PM
To: ceph-users@xxxxxxxxxxxxxx
Cc: Albert.K.Chong (git.usca07.Newegg) 22201
Subject: Re:  ceph-disk: Error: No cluster conf found in /etc/ceph with fsid


Hello,

If you google the EXACT subject of your mail you will find several threads about this; the first one is most likely exactly what you're seeing (leftovers from an install that was not fully cleaned/purged).

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-May/040128.html
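
If you want to start completely clean, the teardown sequence from the quick start is roughly this (a sketch; run from the my-cluster directory on the admin node, and adjust the node names, taken here from your mail, to your setup):

    $ ceph-deploy purge admin-node
    $ ceph-deploy purgedata admin-node
    $ ceph-deploy forgetkeys
    $ rm ceph.*

One caveat (an assumption on my part, based on the error below): purge/purgedata clean up /etc/ceph and /var/lib/ceph, but a directory-based OSD such as /home/albert/my-cluster/cephd2 keeps its old ceph_fsid file, so delete and re-prepare that directory as well before activating against the new cluster.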

Christian

On Wed, 25 May 2016 22:03:49 +0000 Albert.K.Chong (git.usca07.Newegg)
22201 wrote:

> Hi,
> 
> I have followed the Storage Cluster Quick Start instructions on my CentOS 7
> setup more than 10 times, including complete cleaning and reinstallation,
> and I fail at the same step every time: "ceph-deploy osd activate ...".
> On the last try I just created the disk directory on the local drive to
> avoid some permission warnings, ran "ceph-deploy osd prepare ..", and then:
> 
> [albert@admin-node my-cluster]$ ceph-deploy osd activate admin-node:/home/albert/my-cluster/cephd2
> [ceph_deploy.conf][DEBUG ] found configuration file at: /home/albert/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy osd activate admin-node:/home/albert/my-cluster/cephd2
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username                      : None
> [ceph_deploy.cli][INFO  ]  verbose                       : False
> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
> [ceph_deploy.cli][INFO  ]  subcommand                    : activate
> [ceph_deploy.cli][INFO  ]  quiet                         : False
> [ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xe82518>
> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
> [ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xe75c08>
> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
> [ceph_deploy.cli][INFO  ]  default_release               : False
> [ceph_deploy.cli][INFO  ]  disk                          : [('admin-node', '/home/albert/my-cluster/cephd2', None)]
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks admin-node:/home/albert/my-cluster/cephd2:
> [admin-node][DEBUG ] connection detected need for sudo
> [admin-node][DEBUG ] connected to host: admin-node
> [admin-node][DEBUG ] detect platform information from remote host
> [admin-node][DEBUG ] detect machine type
> [admin-node][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
> [ceph_deploy.osd][DEBUG ] activating host admin-node disk /home/albert/my-cluster/cephd2
> [ceph_deploy.osd][DEBUG ] will use init type: systemd
> [admin-node][DEBUG ] find the location of an executable
> [admin-node][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/albert/my-cluster/cephd2
> [admin-node][WARNIN] main_activate: path = /home/albert/my-cluster/cephd2
> [admin-node][WARNIN] activate: Cluster uuid is 8f9bf207-6c6a-4764-8b9e-63f70810837b
> [admin-node][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
> [admin-node][WARNIN] Traceback (most recent call last):
> [admin-node][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
> [admin-node][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
> [admin-node][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
> [admin-node][WARNIN]     main(sys.argv[1:])
> [admin-node][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
> [admin-node][WARNIN]     args.func(args)
> [admin-node][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3277, in main_activate
> [admin-node][WARNIN]     init=args.mark_init,
> [admin-node][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3097, in activate_dir
> [admin-node][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
> [admin-node][WARNIN]   File "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 3173, in activate
> [admin-node][WARNIN]     ' with fsid %s' % ceph_fsid)
> [admin-node][WARNIN] ceph_disk.main.Error: Error: No cluster conf found in /etc/ceph with fsid 8f9bf207-6c6a-4764-8b9e-63f70810837b
> [admin-node][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /home/albert/my-cluster/cephd2
> 
> 
> Need some help.  Really appreciated.
> 
> 
> Albert


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


