Re: ceph orch issue: lsblk: /dev/vg_osd/lvm_osd: not a block device

I'm not sure you need to (or should) prepare the block device manually;
Ceph can handle those steps itself. Did you try cleaning up and retrying
by providing /dev/sda6 directly to ceph orch daemon add?
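
Roughly along these lines (just a sketch based on the device names in
your post, not something I've run against your setup - the zap step
wipes /dev/sda6, so double-check the path before running it):

    # tear down the manually created LVM layout
    lvremove -y /dev/vg_osd/lv_osd
    vgremove vg_osd
    pvremove /dev/sda6

    # hand the raw partition to cephadm and let ceph-volume manage it
    ceph orch device zap ceph1 /dev/sda6 --force
    ceph orch daemon add osd ceph1:/dev/sda6

ceph-volume creates its own VG/LV on the device it is given, so the
manual pvcreate/vgcreate/lvcreate steps shouldn't be needed at all.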

On Sun, May 26, 2024, 10:50 duluxoz <duluxoz@xxxxxxxxx> wrote:

> Hi All,
>
> Is the following a bug or some other problem? (I can't tell.) :-)
>
> Brand new Ceph (Reef v18.2.3) install on Rocky Linux v9.4 - basically,
> it's a brand new box.
>
> Ran the following commands (in order; no issues until final command):
>
>  1. pvcreate /dev/sda6
>  2. vgcreate vg_osd /dev/sda6
>  3. lvcreate -l 100%VG -n lv_osd vg_osd
>  4. cephadm bootstrap --mon-ip 192.168.0.20
>  5. ceph orch daemon add osd ceph1:/dev/vg_osd/lvm_osd
>
> Received a whole bunch of error info on the console; the two relevant
> lines (as far as I can tell) are:
>
>   * /usr/bin/podman: stderr  stderr: lsblk: /dev/vg_osd/lvm_osd: not a
>     block device
>   * RuntimeError: Failed command: /usr/bin/podman run --rm --ipc=host
>     --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume
>     --privileged --group-add=disk --init -e
>     CONTAINER_IMAGE=
> quay.io/ceph/ceph@sha256:257b3f5140c11b51fd710ffdad6213ed53d74146f464a51717262d156daef553
>     -e NODE_NAME=ceph1 -e CEPH_USE_RANDOM_NONCE=1 -e
>     CEPH_VOLUME_OSDSPEC_AFFINITY=None -e CEPH_VOLUME_SKIP_RESTORECON=yes
>     -e CEPH_VOLUME_DEBUG=1 -v
>     /var/run/ceph/477045f4-1b34-11ef-9a30-0800274c7359:/var/run/ceph:z
>     -v
>     /var/log/ceph/477045f4-1b34-11ef-9a30-0800274c7359:/var/log/ceph:z
>     -v
>
> /var/lib/ceph/477045f4-1b34-11ef-9a30-0800274c7359/crash:/var/lib/ceph/crash:z
>     -v /run/systemd/journal:/run/systemd/journal -v /dev:/dev -v
>     /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm -v
>     /run/lock/lvm:/run/lock/lvm -v
>
> /var/lib/ceph/477045f4-1b34-11ef-9a30-0800274c7359/selinux:/sys/fs/selinux:ro
>     -v /:/rootfs -v /etc/hosts:/etc/hosts:ro -v
>     /tmp/ceph-tmpe_krhtt8:/etc/ceph/ceph.conf:z -v
>     /tmp/ceph-tmp_47jsxdp:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
>
> quay.io/ceph/ceph@sha256:257b3f5140c11b51fd710ffdad6213ed53d74146f464a51717262d156daef553
>     lvm batch --no-auto /dev/vg_osd/lvm_osd --yes --no-systemd
>
> I had a look around the net and couldn't find anything relevant. This
> post (https://github.com/rook/rook/issues/4967) describes a similar
> issue with Rook, but I'm using cephadm, not Rook.
>
> Any help in resolving this (or confirming it is a bug) would be greatly
> appreciated - thanks in advance.
>
> Cheers
>
> Dulux-Oz
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


