ceph-disk: Error: No cluster conf found in /etc/ceph with fsid

On Mon, May 26, 2014 at 5:22 AM, JinHwan Hwang <calanchue at gmail.com> wrote:
> I'm trying to install ceph 0.80.1 on ubuntu 14.04. Everything goes well
> except the 'activate osd' phase: it tells me it can't find the proper fsid
> when I run 'activate osd'. This is not my first time installing ceph; the
> same process worked fine before (though that was on ubuntu 12.04 virtual
> machines, with ceph-emperor).
>
> ceph at ceph-mon:~$ ceph-deploy osd activate ceph-osd0:/dev/sdb1
> ceph-osd0:/dev/sdc1 ceph-osd1:/dev/sdb1 ceph-osd1:/dev/sdc1
> ...
> [ceph-osd0][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph
> with fsid 05b994a0-20f9-48d7-8d34-107ffcb39e5b

It seems to me that this is caused by stale daemons left behind. Are you
sure you ran both purge and purgedata on all your nodes before trying
again?
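
For reference, a full cleanup from the admin node looks something like
this (host names taken from your log; adjust to your setup):

  ceph-deploy purge ceph-mon ceph-osd0 ceph-osd1
  ceph-deploy purgedata ceph-mon ceph-osd0 ceph-osd1
  ceph-deploy forgetkeys

purge removes the packages, purgedata removes /etc/ceph and /var/lib/ceph
on the remote nodes, and forgetkeys deletes the keyrings ceph-deploy keeps
in its local working directory, so the next 'mon create' starts from a
clean slate.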

If you did, make sure there are no Ceph processes left behind. We bring
clusters up, tear them down, and bring them up again on the same machines
all the time, and have not been able to reproduce this behavior (we will
have to try on 14.04, though).
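
On trusty the daemons are managed by upstart, so something like this on
each node should show anything left running:

  ps -ef | grep ceph
  sudo initctl list | grep ceph
  sudo stop ceph-all

(ceph-all is the umbrella upstart job; if nothing is running, stop just
reports an unknown instance, which is fine.)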

Can you paste the full logs (from purge+purgedata through install, mon
create, and osd activate) so I can take a look at them?



> ..
>
> One weird thing is that every time I install and uninstall ceph, the fsid
> value in /etc/ceph/ceph.conf changes, but
> '05b994a0-20f9-48d7-8d34-107ffcb39e5b' never does. It looks like some
> value is retained even though I uninstall ceph. I run purge, purgedata,
> forgetkeys, and zap, and delete the partitions with fdisk every time I
> uninstall ceph.
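
That stale fsid lives on the OSD data partition itself, not in /etc/ceph:
ceph-disk writes a ceph_fsid file there at prepare time, and activation
then looks for a conf file in /etc/ceph whose fsid matches it. You can
check what cluster a partition thinks it belongs to with something like:

  sudo mount /dev/sdb1 /mnt
  cat /mnt/ceph_fsid
  grep fsid /etc/ceph/ceph.conf
  sudo umount /mnt

If the two values differ, the partition was prepared against an older
ceph.conf and needs to be zapped and prepared again.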


>
> I changed the fsid value in ceph.conf and distributed it, but that
> attempt also led me to a dead end. After the second attempt, the OSDs
> complain about the on-disk fsid: 'ondisk fsid
> 00000000-0000-0000-0000-000000000000 doesn't match expected
> cc04e573-7a4d-42e7-9268-3a93f1052aee, invalid (someone else's?) journal'.
> So I looked at the disks with fdisk, and I found a weird message printed
> above the partition table:
>
> WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk
> doesn't support GPT. Use GNU Parted.
>
> This is something I never saw with ceph-emperor.
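
The GPT warning is expected: ceph-disk partitions OSD disks with GPT, and
the fdisk shipped with 14.04 only understands MBR, so deleting partitions
with fdisk does not actually remove the GPT metadata. To wipe a disk
completely, use something along these lines instead (this destroys all
data on the disk):

  ceph-deploy disk zap ceph-osd0:/dev/sdb
  # or directly on the node:
  sudo sgdisk --zap-all /dev/sdb

Zapping and re-preparing should also get rid of the leftover journal
behind the 'ondisk fsid ... doesn't match expected' error from your
second attempt.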
>
> Does the current ceph version (firefly) not support ubuntu 14.04, or am I
> missing some setup?
> Thanks in advance for any help.
>
> First attempt.
>
> ceph at ceph-mon:~$ ceph-deploy osd activate ceph-osd0:/dev/sdb1
> ceph-osd0:/dev/sdc1 ceph-osd1:/dev/sdb1 ceph-osd1:/dev/sdc1
> [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd activate
> ceph-osd0:/dev/sdb1 ceph-osd0:/dev/sdc1 ceph-osd1:/dev/sdb1
> ceph-osd1:/dev/sdc1
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-osd0:/dev/sdb1:
> ceph-osd0:/dev/sdc1: ceph-osd1:/dev/sdb1: ceph-osd1:/dev/sdc1:
> [ceph-osd0][DEBUG ] connected to host: ceph-osd0
> [ceph-osd0][DEBUG ] detect platform information from remote host
> [ceph-osd0][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
> [ceph_deploy.osd][DEBUG ] activating host ceph-osd0 disk /dev/sdb1
> [ceph_deploy.osd][DEBUG ] will use init type: upstart
> [ceph-osd0][INFO  ] Running command: sudo ceph-disk-activate --mark-init
> upstart --mount /dev/sdb1
> [ceph-osd0][WARNIN] ceph-disk: Error: No cluster conf found in /etc/ceph
> with fsid 05b994a0-20f9-48d7-8d34-107ffcb39e5b
> [ceph-osd0][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
> ceph-disk-activate --mark-init upstart --mount /dev/sdb1
>
> Second attempt
>
> ceph at ceph-mon:~$ ceph-deploy osd activate ceph-osd0:/dev/sdb1
> ceph-osd0:/dev/sdc1 ceph-osd1:/dev/sdb1 ceph-osd1:/dev/sdc1
> [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd activate
> ceph-osd0:/dev/sdb1 ceph-osd0:/dev/sdc1 ceph-osd1:/dev/sdb1
> ceph-osd1:/dev/sdc1
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-osd0:/dev/sdb1:
> ceph-osd0:/dev/sdc1: ceph-osd1:/dev/sdb1: ceph-osd1:/dev/sdc1:
> [ceph-osd0][DEBUG ] connected to host: ceph-osd0
> [ceph-osd0][DEBUG ] detect platform information from remote host
> [ceph-osd0][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
> [ceph_deploy.osd][DEBUG ] activating host ceph-osd0 disk /dev/sdb1
> [ceph_deploy.osd][DEBUG ] will use init type: upstart
> [ceph-osd0][INFO  ] Running command: sudo ceph-disk-activate --mark-init
> upstart --mount /dev/sdb1
> [ceph-osd0][WARNIN] got monmap epoch 1
> [ceph-osd0][WARNIN] 2014-05-26 17:55:06.550365 7fd5ec9c8800 -1 journal
> FileJournal::_open: disabling aio for non-block journal.  Use
> journal_force_aio to force use of aio anyway
> [ceph-osd0][WARNIN] 2014-05-26 17:55:06.550400 7fd5ec9c8800 -1 journal
> check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match
> expected cc04e573-7a4d-42e7-9268-3a93f1052aee, invalid (someone else's?)
> journal
> [ceph-osd0][WARNIN] 2014-05-26 17:55:06.550422 7fd5ec9c8800 -1
> filestore(/dev/sdb1) mkjournal error creating journal on /dev/sdb1/journal:
> (22) Invalid argument
> [ceph-osd0][WARNIN] 2014-05-26 17:55:06.550438 7fd5ec9c8800 -1 OSD::mkfs:
> ObjectStore::mkfs failed with error -22
> [ceph-osd0][WARNIN] 2014-05-26 17:55:06.550466 7fd5ec9c8800 -1  ** ERROR:
> error creating empty object store in /dev/sdb1: (22) Invalid argument
> [ceph-osd0][WARNIN] Traceback (most recent call last):
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 2579, in <module>
> [ceph-osd0][WARNIN]     main()
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 2557, in main
> [ceph-osd0][WARNIN]     args.func(args)
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1917, in
> main_activate
> [ceph-osd0][WARNIN]     init=args.mark_init,
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1749, in activate_dir
> [ceph-osd0][WARNIN]     (osd_id, cluster) = activate(path,
> activate_key_template, init)
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1849, in activate
> [ceph-osd0][WARNIN]     keyring=keyring,
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 1484, in mkfs
> [ceph-osd0][WARNIN]     '--keyring', os.path.join(path, 'keyring'),
> [ceph-osd0][WARNIN]   File "/usr/sbin/ceph-disk", line 303, in
> command_check_call
> [ceph-osd0][WARNIN]     return subprocess.check_call(arguments)
> [ceph-osd0][WARNIN]   File "/usr/lib/python2.7/subprocess.py", line 540, in
> check_call
> [ceph-osd0][WARNIN]     raise CalledProcessError(retcode, cmd)
> [ceph-osd0][WARNIN] subprocess.CalledProcessError: Command
> '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0',
> '--monmap', '/dev/sdb1/activate.monmap', '--osd-data', '/dev/sdb1',
> '--osd-journal', '/dev/sdb1/journal', '--osd-uuid',
> 'cc04e573-7a4d-42e7-9268-3a93f1052aee', '--keyring', '/dev/sdb1/keyring']'
> returned non-zero exit status 1
> [ceph-osd0][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command:
> ceph-disk-activate --mark-init upstart --mount /dev/sdb1
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

