Re: Automanage block devices

Hey,

/usr/sbin/ceph-volume ... lvm batch --no-auto /dev/rbd0 
You want to add an OSD using rbd0?

To map a block device, just use rbd map (https://docs.ceph.com/en/quincy/man/8/rbdmap/).
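
For example, a minimal sketch (the pool/image name "mypool/myimage" and the client id "admin" are placeholders, adjust them to your setup):

  # one-off mapping; the kernel assigns the next free device, e.g. /dev/rbd0
  rbd map mypool/myimage

  # or, to map automatically at boot, add a line to /etc/ceph/rbdmap:
  mypool/myimage id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
  # and enable the service:
  systemctl enable rbdmap.service

As for excluding mapped devices from automanage (your last question): as far as I know, you can set the spec to unmanaged while you sort this out:

  ceph orch apply osd --all-available-devices --unmanaged=true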

Étienne

> -----Original Message-----
> From: Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>
> Sent: Monday, August 29, 2022 12:32
> To: ceph-users@xxxxxxx
> Subject:  Automanage block devices
> 
> Hi,
> 
> I really like ceph's ability to auto-manage block devices, but I get ceph status
> warnings when I map an image to a /dev/rbd device.
> 
> Some log output:
> Aug 29 11:57:34 hvs002 bash[465970]: Non-zero exit code 2 from /usr/bin/docker run
>   --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/sbin/ceph-volume
>   --privileged --group-add=disk --init
>   -e CONTAINER_IMAGE=quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a
>   -e NODE_NAME=hvs002 -e CEPH_USE_RANDOM_NONCE=1
>   -e CEPH_VOLUME_OSDSPEC_AFFINITY=all-available-devices
>   -e CEPH_VOLUME_SKIP_RESTORECON=yes -e CEPH_VOLUME_DEBUG=1
>   -v /var/run/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/run/ceph:z
>   -v /var/log/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585:/var/log/ceph:z
>   -v /var/lib/ceph/dd4b0610-b4d2-11ec-bb58-d1b32ae31585/crash:/var/lib/ceph/crash:z
>   -v /dev:/dev -v /run/udev:/run/udev -v /sys:/sys -v /run/lvm:/run/lvm
>   -v /run/lock/lvm:/run/lock/lvm -v /:/rootfs
>   -v /tmp/ceph-tmpke1ihnc_:/etc/ceph/ceph.conf:z
>   -v /tmp/ceph-tmpaqbxw8ga:/var/lib/ceph/bootstrap-osd/ceph.keyring:z
>   quay.io/ceph/ceph@sha256:43f6e905f3e34abe4adbc9042b9d6f6b625dee8fa8d93c2bae53fa9b61c3df1a
>   lvm batch --no-auto /dev/rbd0 --yes --no-systemd
> 
> Aug 29 11:57:34 hvs002 bash[465970]: /usr/bin/docker: stderr  stderr: lsblk: /dev/rbd0: not a block device
> 
> Aug 29 11:57:34 hvs002 bash[465970]: cluster 2022-08-29T09:57:33.973654+0000 mon.hvs001 (mon.0)
>   34133 : cluster [WRN] Health check failed: Failed to apply 1 service(s):
>   osd.all-available-devices (CEPHADM_APPLY_SPEC_FAIL)
> 
> If I map an image to an rbd device, the automanage feature wants to add it as an OSD.
> It fails (apparently the mapped image isn't detected as a block device), so I guess my
> images are untouched, but I still worry, because I can't find much information about
> these warnings.
> 
> Do I risk a conflict between my operations on a mapped rbd image/device?
> Will ceph at some point alter my image unintentionally?
> 
> Do I risk ceph adding such an image as an OSD?
> 
> I can disable the managed flag of the OSD service, but then I lose ceph's automatic
> functions. Is there a way to tell ceph to exclude /dev/rbd* devices from the
> autodetect/automanage?
> 
> Greetings,
> 
> Dominique.
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



