Re: question about activating an OSD


Hi German,
If I'm right, the journal creation on /dev/sdc1 failed (perhaps because you only said /dev/sdc instead of /dev/sdc1?).

Do you have partitions on sdc?
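You can check on ceph-bkp-osd01 with something like this (just a quick sketch, adjust to your setup):

    sudo parted /dev/sdc print   # list the partition table on the journal disk
    lsblk /dev/sdc               # quick overview of the device and any partitions

If sdc still carries a leftover journal from an earlier attempt, that would explain the "invalid (someone else's?) journal" message during activate. Zapping the disk before re-running prepare should clear it, but be aware this wipes /dev/sdc completely:

    ceph-deploy disk zap ceph-bkp-osd01:sdc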


Udo

On 31.10.2014 22:02, German Anders wrote:
Hi all,
      I'm having some issues while trying to activate a new OSD in a new cluster. The prepare command ran fine, but the activate command failed:

ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy --overwrite-conf disk prepare --fs-type btrfs ceph-bkp-osd01:sdf:/dev/sdc
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy --overwrite-conf disk prepare --fs-type btrfs ceph-bkp-osd01:sdf:/dev/sdc
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-bkp-osd01:/dev/sdf:/dev/sdc
[ceph-bkp-osd01][DEBUG ] connected to host: ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] detect platform information from remote host
[ceph-bkp-osd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-bkp-osd01][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-bkp-osd01 disk /dev/sdf journal /dev/sdc activate False
[ceph-bkp-osd01][INFO  ] Running command: sudo ceph-disk-prepare --fs-type btrfs --cluster ceph -- /dev/sdf /dev/sdc
[ceph-bkp-osd01][WARNIN] libust[13609/13609]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] libust[13627/13627]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[ceph-bkp-osd01][WARNIN] Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
[ceph-bkp-osd01][DEBUG ] Creating new GPT entries.
[ceph-bkp-osd01][DEBUG ] The operation has completed successfully.
[ceph-bkp-osd01][DEBUG ] Creating new GPT entries.
[ceph-bkp-osd01][DEBUG ] The operation has completed successfully.
[ceph-bkp-osd01][DEBUG ]
[ceph-bkp-osd01][DEBUG ] WARNING! - Btrfs v3.12 IS EXPERIMENTAL
[ceph-bkp-osd01][DEBUG ] WARNING! - see http://btrfs.wiki.kernel.org before using
[ceph-bkp-osd01][DEBUG ]
[ceph-bkp-osd01][DEBUG ] fs created label (null) on /dev/sdf1
[ceph-bkp-osd01][DEBUG ]     nodesize 32768 leafsize 32768 sectorsize 4096 size 2.73TiB
[ceph-bkp-osd01][DEBUG ] Btrfs v3.12
[ceph-bkp-osd01][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-bkp-osd01 is now ready for osd use.
ceph@cephbkdeploy01:~/desp-bkp-cluster$
ceph@cephbkdeploy01:~/desp-bkp-cluster$ ceph-deploy --overwrite-conf disk activate --fs-type btrfs ceph-bkp-osd01:sdf1:/dev/sdc1
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy --overwrite-conf disk activate --fs-type btrfs ceph-bkp-osd01:sdf1:/dev/sdc1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph-bkp-osd01:/dev/sdf1:/dev/sdc1
[ceph-bkp-osd01][DEBUG ] connected to host: ceph-bkp-osd01
[ceph-bkp-osd01][DEBUG ] detect platform information from remote host
[ceph-bkp-osd01][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host ceph-bkp-osd01 disk /dev/sdf1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[ceph-bkp-osd01][INFO  ] Running command: sudo ceph-disk-activate --mark-init upstart --mount /dev/sdf1
[ceph-bkp-osd01][WARNIN] libust[14025/14025]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] libust[14028/14028]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] got monmap epoch 1
[ceph-bkp-osd01][WARNIN] libust[14059/14059]: Warning: HOME environment variable not set. Disabling LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936163 7ffb41d32900 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936221 7ffb41d32900 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 6a26ef1f-6ece-4383-8304-7a8d064ef2b4, invalid (someone else's?) journal
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936275 7ffb41d32900 -1 filestore(/var/lib/ceph/tmp/mnt.vt_waK) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.vt_waK/journal: (22) Invalid argument
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936310 7ffb41d32900 -1 OSD::mkfs: ObjectStore::mkfs failed with error -22
[ceph-bkp-osd01][WARNIN] 2014-10-31 17:00:10.936389 7ffb41d32900 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.vt_waK: (22) Invalid argument
[ceph-bkp-osd01][WARNIN] ERROR:ceph-disk:Failed to activate
[ceph-bkp-osd01][WARNIN] Traceback (most recent call last):
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 2792, in <module>
[ceph-bkp-osd01][WARNIN]     main()
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 2770, in main
[ceph-bkp-osd01][WARNIN]     args.func(args)
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 2004, in main_activate
[ceph-bkp-osd01][WARNIN]     init=args.mark_init,
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 1778, in mount_activate
[ceph-bkp-osd01][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 1943, in activate
[ceph-bkp-osd01][WARNIN]     keyring=keyring,
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 1573, in mkfs
[ceph-bkp-osd01][WARNIN]     '--keyring', os.path.join(path, 'keyring'),
[ceph-bkp-osd01][WARNIN]   File "/usr/sbin/ceph-disk", line 316, in command_check_call
[ceph-bkp-osd01][WARNIN]     return subprocess.check_call(arguments)
[ceph-bkp-osd01][WARNIN]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
[ceph-bkp-osd01][WARNIN]     raise CalledProcessError(retcode, cmd)
[ceph-bkp-osd01][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '7', '--monmap', '/var/lib/ceph/tmp/mnt.vt_waK/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.vt_waK', '--osd-journal', '/var/lib/ceph/tmp/mnt.vt_waK/journal', '--osd-uuid', '6a26ef1f-6ece-4383-8304-7a8d064ef2b4', '--keyring', '/var/lib/ceph/tmp/mnt.vt_waK/keyring']' returned non-zero exit status 1
[ceph-bkp-osd01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init upstart --mount /dev/sdf1

ceph@cephbkdeploy01:~/desp-bkp-cluster$


I'm using Ubuntu 14.04 LTS with kernel 3.13.0-24-generic and Ceph version 0.87 (dev).

Any ideas?

Thanks in advance,

Best regards,


German Anders

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

