Question about OSD activate with ceph-deploy

Hi all,

I'm having some issues running the osd activate command with the ceph-deploy tool (1.5.28). The osd prepare command runs fine, but activate then fails. The layout is:

osd: sdf1
journal: /dev/sdc1
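
For reference, the prepare step was invoked along these lines (I'm reconstructing the exact command from the layout above, so take it as approximate):

$ ceph-deploy osd prepare cibn01:sdf1:/dev/sdc1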


$ ceph-deploy osd activate cibn01:sdf1:/dev/sdc1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /usr/local/bin/ceph-deploy osd activate cibn01:sdf1:/dev/sdc1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fdcf9f378c0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7fdcfa196320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('cibn01', '/dev/sdf1', '/dev/sdc1')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks cibn01:/dev/sdf1:/dev/sdc1
ceph@cibn01's password:
[cibn01][DEBUG ] connection detected need for sudo
ceph@cibn01's password:
[cibn01][DEBUG ] connected to host: cibn01
[cibn01][DEBUG ] detect platform information from remote host
[cibn01][DEBUG ] detect machine type
[cibn01][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host cibn01 disk /dev/sdf1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[cibn01][INFO  ] Running command: sudo ceph-disk -v activate --mark-init upstart --mount /dev/sdf1
[cibn01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf1 uuid path is /sys/dev/block/8:81/dm/uuid
[cibn01][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf1 uuid path is /sys/dev/block/8:81/dm/uuid
[cibn01][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk -i 1 /dev/sdf
[cibn01][WARNIN] INFO:ceph-disk:Running command: /sbin/blkid -p -s TYPE -ovalue -- /dev/sdf1
[cibn01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[cibn01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
[cibn01][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdf1 on /var/lib/ceph/tmp/mnt.zv_wAh with options noatime,user_subvol_rm_allowed
[cibn01][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t btrfs -o noatime,user_subvol_rm_allowed -- /dev/sdf1 /var/lib/ceph/tmp/mnt.zv_wAh
[cibn01][WARNIN] DEBUG:ceph-disk:Cluster uuid is 1661668a-bc97-419f-9000-6fb23f364479
[cibn01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cibn01][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[cibn01][WARNIN] DEBUG:ceph-disk:OSD uuid is xxxx-xxxx-xxxxx-xxxx-xxxxx
[cibn01][WARNIN] DEBUG:ceph-disk:OSD id is 0
[cibn01][WARNIN] DEBUG:ceph-disk:Initializing OSD...
[cibn01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.zv_wAh/activate.monmap
[cibn01][WARNIN] got monmap epoch 2
[cibn01][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 0 --monmap /var/lib/ceph/tmp/mnt.zv_wAh/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.zv_wAh --osd-journal /var/lib/ceph/tmp/mnt.zv_wAh/journal --osd-uuid 81d60fe0-3b40-4045-8674-7f4723c6a67a --keyring /var/lib/ceph/tmp/mnt.zv_wAh/keyring --setuser ceph --setgroup ceph
[cibn01][WARNIN] 2015-11-13 09:34:35.903047 7f2be4e3a940 -1 filestore(/var/lib/ceph/tmp/mnt.zv_wAh) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.zv_wAh/journal: (13) Permission denied
[cibn01][WARNIN] 2015-11-13 09:34:35.903059 7f2be4e3a940 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13
[cibn01][WARNIN] 2015-11-13 09:34:35.903080 7f2be4e3a940 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.zv_wAh: (13) Permission denied
[cibn01][WARNIN] ERROR:ceph-disk:Failed to activate
[cibn01][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.zv_wAh
[cibn01][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.zv_wAh
[cibn01][WARNIN] Traceback (most recent call last):
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 3576, in <module>
[cibn01][WARNIN]     main(sys.argv[1:])
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 3530, in main
[cibn01][WARNIN]     args.func(args)
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 2424, in main_activate
[cibn01][WARNIN]     dmcrypt_key_dir=args.dmcrypt_key_dir,
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 2197, in mount_activate
[cibn01][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 2360, in activate
[cibn01][WARNIN]     keyring=keyring,
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 1950, in mkfs
[cibn01][WARNIN]     '--setgroup', get_ceph_user(),
[cibn01][WARNIN]   File "/usr/sbin/ceph-disk", line 349, in command_check_call
[cibn01][WARNIN]     return subprocess.check_call(arguments)
[cibn01][WARNIN]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
[cibn01][WARNIN]     raise CalledProcessError(retcode, cmd)
[cibn01][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '0', '--monmap', '/var/lib/ceph/tmp/mnt.zv_wAh/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.zv_wAh', '--osd-journal', '/var/lib/ceph/tmp/mnt.zv_wAh/journal', '--osd-uuid', 'xxxx-xxxx-xxxxx-xxxx-xxxxx', '--keyring', '/var/lib/ceph/tmp/mnt.zv_wAh/keyring', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 1
[cibn01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk -v activate --mark-init upstart --mount /dev/sdf1


Any ideas? I've already tried doing a chown -R ceph:ceph on /var/lib/ceph, /var/run/ceph and /etc/ceph, but the same error appears again.
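
To be concrete, the ownership fix I already tried looks like this (run on cibn01):

$ sudo chown -R ceph:ceph /var/lib/ceph
$ sudo chown -R ceph:ceph /var/run/ceph
$ sudo chown -R ceph:ceph /etc/ceph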

Thanks in advance,

German
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
