Error adding OSD

Hi all,

I am trying to add an OSD using cephadm, but it fails with the message found below. Do you have any idea what may be wrong? The given device used to be in the cluster but has since been removed, and the device now appears as available in the `ceph orch device ls` output.

Thank you,
Laszlo

root@monitor1:~# ceph orch device ls| grep storage3
storage3  /dev/sdb  hdd   ATA_QEMU_HARDDISK_QM00002  10.7G No         15m ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
storage3  /dev/sdc  hdd   ATA_QEMU_HARDDISK_QM00003  10.7G Yes        15m ago
storage3  /dev/sdd  hdd   ATA_QEMU_HARDDISK_QM00004  8589M No         15m ago    locked
root@monitor1:~#
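For reference, the failing call path (`_daemon_add_osd` in the traceback below, ending in `ceph-volume lvm batch --no-auto /dev/sdc`) suggests the OSD was added with commands along these lines; this is an inferred sketch, not the exact invocations, and the zap step is only the usual way a previously removed device is cleaned before re-adding:

```shell
# Wipe leftover LVM/filesystem metadata from the previously removed device
# (cephadm's wrapper around ceph-volume lvm zap); sketch only:
ceph orch device zap storage3 /dev/sdc --force

# Re-add the device as an OSD on host storage3:
ceph orch daemon add osd storage3:/dev/sdc
```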

Here is the error:


Error EINVAL: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/mgr_module.py", line 1756, in _handle_command
    return self.handle_command(inbuf, cmd)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 171, in handle_command
    return dispatch[cmd['prefix']].call(self, cmd, inbuf)
  File "/usr/share/ceph/mgr/mgr_module.py", line 462, in call
    return self.func(mgr, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 107, in <lambda>
    wrapper_copy = lambda *l_args, **l_kwargs: wrapper(*l_args, **l_kwargs)  # noqa: E731
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 96, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/ceph/mgr/orchestrator/module.py", line 843, in _daemon_add_osd
    raise_if_exception(completion)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 228, in raise_if_exception
    raise e
RuntimeError: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/config/ceph.conf
Non-zero exit code 1 from /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:6b0a24e3146d4723700ce6579d40e6016b2c63d9bf90422653f2d4caa49be232 -e NODE_NAME=storage3 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8:/var/run/ceph:z -v /var/log/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8:/var/log/ceph:z -v /var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp4r8kteec:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmppw___l6k:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:6b0a24e3146d4723700ce6579d40e6016b2c63d9bf90422653f2d4caa49be232 lvm batch --no-auto /dev/sdc --yes --no-systemd
/usr/bin/docker: stderr --> passed data devices: 1 physical, 0 LVM
/usr/bin/docker: stderr --> relative data size: 1.0
/usr/bin/docker: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/docker: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d90dcdee-035c-4f3c-80f6-5d3eed25d598
/usr/bin/docker: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/vgcreate --force --yes ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e /dev/sdc
/usr/bin/docker: stderr  stdout: Physical volume "/dev/sdc" successfully created.
/usr/bin/docker: stderr  stdout: Volume group "ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e" successfully created
/usr/bin/docker: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvcreate --yes -l 2559 -n osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598 ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e
/usr/bin/docker: stderr  stdout: Logical volume "osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598" created.
/usr/bin/docker: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/docker: stderr Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-6
/usr/bin/docker: stderr Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e/osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598
/usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
/usr/bin/docker: stderr Running command: /usr/bin/ln -s /dev/ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e/osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598 /var/lib/ceph/osd/ceph-6/block
/usr/bin/docker: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-6/activate.monmap
/usr/bin/docker: stderr  stderr: got monmap epoch 3
/usr/bin/docker: stderr --> Creating keyring file for osd.6
/usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/keyring
/usr/bin/docker: stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-6/
/usr/bin/docker: stderr Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --keyfile - --osdspec-affinity None --osd-data /var/lib/ceph/osd/ceph-6/ --osd-uuid d90dcdee-035c-4f3c-80f6-5d3eed25d598 --setuser ceph --setgroup ceph
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:34.916+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6/) _read_fsid unparsable uuid
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.152+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.152+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.152+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.156+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.156+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.156+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.712+0000 7f77400f1540 -1 bluestore(/var/lib/ceph/osd/ceph-6//block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-6//block: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.712+0000 7f77400f1540 -1 bdev(0x563bb89db400 /var/lib/ceph/osd/ceph-6//block) open open got: (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.712+0000 7f77400f1540 -1 OSD::mkfs: ObjectStore::mkfs failed with error (13) Permission denied
/usr/bin/docker: stderr  stderr: 2023-09-20T19:34:35.712+0000 7f77400f1540 -1  ** ERROR: error creating empty object store in /var/lib/ceph/osd/ceph-6/: (13) Permission denied
/usr/bin/docker: stderr --> Was unable to complete a new OSD, will rollback changes
/usr/bin/docker: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.6 --yes-i-really-mean-it
/usr/bin/docker: stderr  stderr: purged osd.6
/usr/bin/docker: stderr --> Zapping: /dev/ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e/osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598
/usr/bin/docker: stderr --> Unmounting /var/lib/ceph/osd/ceph-6
/usr/bin/docker: stderr Running command: /usr/bin/umount -v /var/lib/ceph/osd/ceph-6
/usr/bin/docker: stderr  stderr: umount: /var/lib/ceph/osd/ceph-6 unmounted
/usr/bin/docker: stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e/osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598 bs=1M count=10 conv=fsync
/usr/bin/docker: stderr --> Only 1 LV left in VG, will proceed to destroy volume group ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e
/usr/bin/docker: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/vgremove -v -f ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e
/usr/bin/docker: stderr  stderr: Removing ceph--cf156193--5f39--4bfd--91c0--4e1d50fe0e4e-osd--block--d90dcdee--035c--4f3c--80f6--5d3eed25d598 (253:1)
/usr/bin/docker: stderr  stderr: Archiving volume group "ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e" metadata (seqno 5).
/usr/bin/docker: stderr   Releasing logical volume "osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598"
/usr/bin/docker: stderr  stderr: Creating volume group backup "/etc/lvm/backup/ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e" (seqno 6).
/usr/bin/docker: stderr  stdout: Logical volume "osd-block-d90dcdee-035c-4f3c-80f6-5d3eed25d598" successfully removed
/usr/bin/docker: stderr  stderr: Removing physical volume "/dev/sdc" from volume group "ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e"
/usr/bin/docker: stderr  stdout: Volume group "ceph-cf156193-5f39-4bfd-91c0-4e1d50fe0e4e" successfully removed
/usr/bin/docker: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvremove -v -f -f /dev/sdc
/usr/bin/docker: stderr  stdout: Labels on physical volume "/dev/sdc" successfully wiped.
/usr/bin/docker: stderr --> Zapping successful for OSD: 6
/usr/bin/docker: stderr Traceback (most recent call last):
/usr/bin/docker: stderr   File "/usr/sbin/ceph-volume", line 11, in <module>
/usr/bin/docker: stderr     load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 41, in __init__
/usr/bin/docker: stderr     self.main(self.argv)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/usr/bin/docker: stderr     return f(*a, **kw)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 153, in main
/usr/bin/docker: stderr     terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/docker: stderr     instance.main()
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
/usr/bin/docker: stderr     terminal.dispatch(self.mapper, self.argv)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/docker: stderr     instance.main()
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
/usr/bin/docker: stderr     return func(*a, **kw)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 441, in main
/usr/bin/docker: stderr     self._execute(plan)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/batch.py", line 460, in _execute
/usr/bin/docker: stderr     c.create(argparse.Namespace(**args))
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
/usr/bin/docker: stderr     return func(*a, **kw)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
/usr/bin/docker: stderr     prepare_step.safe_prepare(args)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
/usr/bin/docker: stderr     self.prepare()
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
/usr/bin/docker: stderr     return func(*a, **kw)
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 394, in prepare
/usr/bin/docker: stderr     osd_fsid,
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/prepare.py", line 119, in prepare_bluestore
/usr/bin/docker: stderr     db=db
/usr/bin/docker: stderr   File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 484, in osd_mkfs_bluestore
/usr/bin/docker: stderr     raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
/usr/bin/docker: stderr RuntimeError: Command failed with exit code 250: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 6 --monmap /var/lib/ceph/osd/ceph-6/activate.monmap --keyfile - --osdspec-affinity None --osd-data /var/lib/ceph/osd/ceph-6/ --osd-uuid d90dcdee-035c-4f3c-80f6-5d3eed25d598 --setuser ceph --setgroup ceph
Traceback (most recent call last):
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 9653, in <module>
    main()
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 9641, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 2153, in _infer_config
    return func(ctx)
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 2069, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 2181, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 2056, in _validate_fsid
    return func(ctx)
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 6254, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd(), verbosity=CallVerbosity.QUIET_UNLESS_ERROR)
  File "/var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/cephadm.7ab03136237675497d535fb1b85d1d0f95bbe5b95f32cd4e6f3ca71a9f97bf3c", line 1853, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/docker run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:6b0a24e3146d4723700ce6579d40e6016b2c63d9bf90422653f2d4caa49be232 -e NODE_NAME=storage3 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8:/var/run/ceph:z -v /var/log/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8:/var/log/ceph:z -v /var/lib/ceph/314d068c-56ee-11ee-87e2-cd6d389cbfb8/crash:/var/lib/ceph/crash:z -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /tmp/ceph-tmp4r8kteec:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmppw___l6k:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:6b0a24e3146d4723700ce6579d40e6016b2c63d9bf90422653f2d4caa49be232 lvm batch --no-auto /dev/sdc --yes --no-systemd
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
