Re: cephadm custom crush location hooks

I've found crush location hook scripts to be problematic in the containerized/cephadm world.

Our workaround is to place the script in a common location on each OSD node, such as /etc/crush/crushhook.sh, then create a symlink /rootfs -> / on the host and set the configuration value so that the path to the hook script starts with /rootfs.  The containers the OSDs run in have access to /rootfs, and this hack lets all of them reach the crush script without having to manually modify unit files.

For example:

  1. Put the crush hook script on the host OS at /etc/crush/crushhook.sh.
  2. Make a symlink on the host OS:  $ cd /; sudo ln -s / /rootfs
  3. Point the OSDs at it:  $ ceph config set osd crush_location_hook /rootfs/etc/crush/crushhook.sh
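
For reference, a minimal hook script could look like the sketch below.  This is an illustration, not our actual script: the rack file (/etc/crush/rack) is a hypothetical placeholder for however you map hosts to racks.  The real contract is just that Ceph invokes the hook with --cluster, --id and --type arguments and expects a single line of CRUSH key=value pairs on stdout.

    #!/bin/sh
    # Minimal crush location hook (sketch). Ceph calls it as:
    #   crushhook.sh --cluster <name> --id <osd-id> --type osd
    # and reads one line of CRUSH key=value pairs from stdout.
    # It runs as ceph:ceph inside the OSD container, so keep it simple.

    # Hypothetical rack lookup: a file staged on the host, reachable
    # through the /rootfs symlink described above.
    rack=$(cat /rootfs/etc/crush/rack 2>/dev/null || echo default-rack)

    # Note: hostname resolution inside the container may differ from
    # the host; verify on your deployment.
    echo "host=$(hostname -s) rack=${rack} root=default"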


The containers see "/rootfs" and can then access your script.  Be aware, though, that if your script requires any sort of elevated access, it may fail: the hook runs as ceph:ceph in a minimal container, so not every tool or privilege is available.  I had to add a lot of debug output and logging to mine (it's rather complicated) to figure out what was going on while it ran.
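
If you end up needing the same visibility, one low-effort approach is to trace the hook into a log file.  A sketch, assuming /var/log/ceph is mounted and writable by the ceph user in your OSD containers (verify that on your deployment):

    # near the top of the hook script:
    exec 2>>/var/log/ceph/crushhook.log   # stderr to a ceph-writable file
    set -x                                # trace each command into the log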

I would love to see the crush_location_hook script become something that can be stored entirely in the config database instead of referenced as a path, similar to how the SSL certificates for RGW or the dashboard are stored (ceph config-key set ...).  The current situation is not ideal.
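
For comparison, this is the existing pattern for certificates, next to a purely hypothetical equivalent for hooks; the second command does not exist today and only illustrates the wish:

    # existing: store the grafana cert blob in the cluster's config-key store
    ceph config-key set mgr/cephadm/grafana_crt -i certificate.pem

    # hypothetical equivalent for hooks (NOT a real command):
    # ceph config-key set osd/crush_location_hook -i /etc/crush/crushhook.sh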




________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Thursday, May 2, 2024 10:23 AM
To: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject:  cephadm custom crush location hooks

Hi,

we've been using custom crush location hooks for some OSDs [1] for
years. Since we moved to cephadm, we always have to edit the unit.run
file of those OSDs manually because the path to the script is not
mapped into the containers. I don't want to define custom location
hooks globally for all OSDs in the OSD spec, even though in our case
they are limited to only two hosts. But I'm not aware of a way to map
files into the containers of specific OSDs only [2]. Is my assumption
correct that we'll have to live with the manual intervention until we
reorganize our OSD tree? Or did I miss something?
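
For completeness, the spec-level mount from [2] would look roughly like
this (placeholder names and paths, and assuming extra_container_args is
honored for OSD services in your release); it applies to every OSD the
spec manages, which is exactly what I want to avoid:

    service_type: osd
    service_id: hook_osds            # placeholder
    placement:
      hosts:
        - host1                      # placeholder hosts
        - host2
    spec:
      data_devices:
        all: true
    extra_container_args:
      - "-v"
      - "/etc/crush/crushhook.sh:/etc/crush/crushhook.sh:ro"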

Thanks!
Eugen

[1]
https://docs.ceph.com/en/latest/rados/operations/crush-map/#custom-location-hooks
[2]
https://docs.ceph.com/en/latest/cephadm/services/#mounting-files-with-extra-container-arguments
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


