Re: Unable to add OSD

Hi,

were the disks properly wiped before being added to the cluster? I would suggest running 'cephadm ceph-volume lvm zap --destroy /dev/nvme0n1' locally on a node and checking whether the error still occurs after a couple of minutes.
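For reference, the full sequence on one node might look like this (a sketch, not a tested procedure; the device name /dev/nvme0n1 is taken from your log, adjust per host):

```shell
# Wipe leftover LVM metadata and partition signatures from the device
# (--destroy also removes the partition table):
cephadm ceph-volume lvm zap --destroy /dev/nvme0n1

# Re-scan and confirm the device is reported as available again:
ceph orch device ls --refresh

# Re-apply the OSD spec and watch whether OSDs come up:
ceph orch apply osd --all-available-devices
ceph -s
```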

Quoting filip Mutterer <filip@xxxxxxx>:

Hi,

I am trying to add OSDs to a freshly installed 3-node Ceph cluster but don't know how to solve my problem.

It looks like all nodes have the same problem.

The kernel options "cgroup_memory=1 cgroup_enable=memory" are set.

ceph orch apply osd --all-available-devices --unmanaged=false

ceph health detail
HEALTH_WARN Failed to apply 1 service(s): osd.all-available-devices; OSD count 0 < osd_pool_default_size 3
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s): osd.all-available-devices
    osd.all-available-devices: cephadm exited with an error code: 1, stderr:Inferring config /var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/mon.pi5n2/config
Non-zero exit code 1 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:259cbcf17122953765b8febf2e01682caf11c883bd41ffcea12d2afb88f5b9b5 -e NODE_NAME=pi5n2 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=all-available-devices -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba:/var/run/ceph:z -v /var/log/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba:/var/log/ceph:z -v /var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpk8ddqklv:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp467idmf9:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:259cbcf17122953765b8febf2e01682caf11c883bd41ffcea12d2afb88f5b9b5 lvm batch --no-auto /dev/nvme0n1 --yes --no-systemd
/usr/bin/podman: stderr --> passed data devices: 1 physical, 0 LVM
/usr/bin/podman: stderr --> relative data size: 1.0
/usr/bin/podman: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/podman: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a7ca71ee-9a57-4fda-a73d-4ba3373df344
/usr/bin/podman: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/vgcreate --force --yes ceph-722ced72-3e8c-41bb-946b-94d10f6c9734 /dev/nvme0n1
/usr/bin/podman: stderr  stdout: Physical volume "/dev/nvme0n1" successfully created.
/usr/bin/podman: stderr  stdout: Volume group "ceph-722ced72-3e8c-41bb-946b-94d10f6c9734" successfully created
/usr/bin/podman: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/lvcreate --yes -l 244190 -n osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344 ceph-722ced72-3e8c-41bb-946b-94d10f6c9734
/usr/bin/podman: stderr  stdout: Logical volume "osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344" created.
/usr/bin/podman: stderr Running command: /usr/bin/ceph-authtool --gen-print-key
/usr/bin/podman: stderr Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
/usr/bin/podman: stderr Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-722ced72-3e8c-41bb-946b-94d10f6c9734/osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344
/usr/bin/podman: stderr Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
/usr/bin/podman: stderr Running command: /usr/bin/ln -s /dev/ceph-722ced72-3e8c-41bb-946b-94d10f6c9734/osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344 /var/lib/ceph/osd/ceph-2/block
/usr/bin/podman: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
/usr/bin/podman: stderr  stderr: got monmap epoch 3
/usr/bin/podman: stderr --> Creating keyring file for osd.2
/usr/bin/podman: stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
/usr/bin/podman: stderr Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
/usr/bin/podman: stderr Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity all-available-devices --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid a7ca71ee-9a57-4fda-a73d-4ba3373df344 --setuser ceph --setgroup ceph
/usr/bin/podman: stderr --> Was unable to complete a new OSD, will rollback changes
/usr/bin/podman: stderr Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.2 --yes-i-really-mean-it
/usr/bin/podman: stderr  stderr: purged osd.2
/usr/bin/podman: stderr --> Zapping: /dev/ceph-722ced72-3e8c-41bb-946b-94d10f6c9734/osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344
/usr/bin/podman: stderr --> Unmounting /var/lib/ceph/osd/ceph-2
/usr/bin/podman: stderr Running command: /usr/bin/umount -v /var/lib/ceph/osd/ceph-2
/usr/bin/podman: stderr  stderr: umount: /var/lib/ceph/osd/ceph-2 unmounted
/usr/bin/podman: stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-722ced72-3e8c-41bb-946b-94d10f6c9734/osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344 bs=1M count=10 conv=fsync
/usr/bin/podman: stderr  stderr: 10+0 records in
/usr/bin/podman: stderr 10+0 records out
/usr/bin/podman: stderr  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.0413744 s, 253 MB/s
/usr/bin/podman: stderr --> Only 1 LV left in VG, will proceed to destroy volume group ceph-722ced72-3e8c-41bb-946b-94d10f6c9734
/usr/bin/podman: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/vgremove -v -f ceph-722ced72-3e8c-41bb-946b-94d10f6c9734
/usr/bin/podman: stderr  stderr: Removing ceph--722ced72--3e8c--41bb--946b--94d10f6c9734-osd--block--a7ca71ee--9a57--4fda--a73d--4ba3373df344 (254:0)
/usr/bin/podman: stderr  stderr: Releasing logical volume "osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344"
/usr/bin/podman: stderr   Archiving volume group "ceph-722ced72-3e8c-41bb-946b-94d10f6c9734" metadata (seqno 5).
/usr/bin/podman: stderr  stdout: Logical volume "osd-block-a7ca71ee-9a57-4fda-a73d-4ba3373df344" successfully removed.
/usr/bin/podman: stderr  stderr: Removing physical volume "/dev/nvme0n1" from volume group "ceph-722ced72-3e8c-41bb-946b-94d10f6c9734"
/usr/bin/podman: stderr  stdout: Volume group "ceph-722ced72-3e8c-41bb-946b-94d10f6c9734" successfully removed
/usr/bin/podman: stderr  stderr: Creating volume group backup "/etc/lvm/backup/ceph-722ced72-3e8c-41bb-946b-94d10f6c9734" (seqno 6).
/usr/bin/podman: stderr Running command: nsenter --mount=/rootfs/proc/1/ns/mnt --ipc=/rootfs/proc/1/ns/ipc --net=/rootfs/proc/1/ns/net --uts=/rootfs/proc/1/ns/uts /sbin/pvremove -v -f -f /dev/nvme0n1
/usr/bin/podman: stderr  stdout: Labels on physical volume "/dev/nvme0n1" successfully wiped.
/usr/bin/podman: stderr --> Zapping successful for OSD: 2
/usr/bin/podman: stderr Traceback (most recent call last):
/usr/bin/podman: stderr   File "/usr/sbin/ceph-volume", line 33, in <module>
/usr/bin/podman: stderr     sys.exit(load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')())
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 41, in __init__
/usr/bin/podman: stderr     self.main(self.argv)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 59, in newfunc
/usr/bin/podman: stderr     return f(*a, **kw)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/main.py", line 153, in main
/usr/bin/podman: stderr     terminal.dispatch(self.mapper, subcommand_args)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/podman: stderr     instance.main()
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
/usr/bin/podman: stderr     terminal.dispatch(self.mapper, self.argv)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/terminal.py", line 194, in dispatch
/usr/bin/podman: stderr     instance.main()
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
/usr/bin/podman: stderr     return func(*a, **kw)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py", line 414, in main
/usr/bin/podman: stderr     self._execute(plan)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/batch.py", line 432, in _execute
/usr/bin/podman: stderr     c.create(argparse.Namespace(**args))
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
/usr/bin/podman: stderr     return func(*a, **kw)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/create.py", line 26, in create
/usr/bin/podman: stderr     prepare_step.safe_prepare(args)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/prepare.py", line 196, in safe_prepare
/usr/bin/podman: stderr     self.prepare()
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
/usr/bin/podman: stderr     return func(*a, **kw)
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/prepare.py", line 278, in prepare
/usr/bin/podman: stderr     prepare_bluestore(
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/prepare.py", line 59, in prepare_bluestore
/usr/bin/podman: stderr     prepare_utils.osd_mkfs_bluestore(
/usr/bin/podman: stderr   File "/usr/lib/python3.9/site-packages/ceph_volume/util/prepare.py", line 459, in osd_mkfs_bluestore
/usr/bin/podman: stderr     raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
/usr/bin/podman: stderr RuntimeError: Command failed with exit code -11: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity all-available-devices --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid a7ca71ee-9a57-4fda-a73d-4ba3373df344 --setuser ceph --setgroup ceph
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 10889, in <module>
    main()
    ~~~~^^
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 10877, in main
    r = ctx.func(ctx)
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 2576, in _infer_config
    return func(ctx)
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 2492, in _infer_fsid
    return func(ctx)
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 2604, in _infer_image
    return func(ctx)
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 2479, in _validate_fsid
    return func(ctx)
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 7145, in command_ceph_volume
    out, err, code = call_throws(ctx, c.run_cmd(), verbosity=CallVerbosity.QUIET_UNLESS_ERROR)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/cephadm.91b52e446d8f1d91339889933063a5070027dc00f54d563f523727c6dd22b172/__main__.py", line 2267, in call_throws
    raise RuntimeError('Failed command: %s' % ' '.join(command))
RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume --privileged --group-add=disk --init -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:259cbcf17122953765b8febf2e01682caf11c883bd41ffcea12d2afb88f5b9b5 -e NODE_NAME=pi5n2 -e CEPH_USE_RANDOM_NONCE=1 -e CEPH_VOLUME_OSDSPEC_AFFINITY=all-available-devices -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1 -v /var/run/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba:/var/run/ceph:z -v /var/log/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba:/var/log/ceph:z -v /var/lib/ceph/9df6f788-ff59-11ef-849b-d83adde0aaba/crash:/var/lib/ceph/crash:z -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs -v /etc/hosts:/etc/hosts:ro -v /tmp/ceph-tmpk8ddqklv:/etc/ceph/ceph.conf:z -v /tmp/ceph-tmp467idmf9:/var/lib/ceph/bootstrap-osd/ceph.keyring:z quay.io/ceph/ceph@sha256:259cbcf17122953765b8febf2e01682caf11c883bd41ffcea12d2afb88f5b9b5 lvm batch --no-auto /dev/nvme0n1 --yes --no-systemd
[WRN] TOO_FEW_OSDS: OSD count 0 < osd_pool_default_size 3

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





