Hi Tobi,
I didn't know about that config option, but that did the trick!
Thank you!
Kenneth
On 26/02/2021 11:30, Tobias Fischer wrote:
Hi Kenneth,
check the config DB to see which image is set:
ceph config dump
WHO     MASK  LEVEL  OPTION           VALUE                        RO
global        basic  container_image  docker.io/ceph/ceph:v15.2.9  *
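If the full dump is noisy, you can narrow it down to the relevant line (a simple shell filter, assuming grep is available where you run the ceph CLI):
ceph config dump | grep container_image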
You probably have the v15 tag configured, which means the orchestrator will fetch the latest v15 image - as of today, that would be v15.2.9.
So either you change the setting in the config DB (see the example after these steps), or, if you have the v15 tag configured, you can do it like this:
- beforehand, log in to the host that is going to be added
- pull your preferred image:
docker pull ceph/ceph:v15.2.6
- retag it as v15:
docker tag ceph/ceph:v15.2.6 ceph/ceph:v15
- remove the original tag (the image itself stays available under v15):
docker rmi ceph/ceph:v15.2.6
- add the host as usual
The orchestrator will use the configured v15 image, which on the new host now corresponds to v15.2.6.
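For the first option (changing it in the config DB), a minimal sketch, assuming you want to pin the whole cluster to the exact 15.2.8 image your other daemons are running:
ceph config set global container_image docker.io/ceph/ceph:v15.2.8
With an exact tag set, any daemon the orchestrator deploys afterwards uses that image, regardless of where the floating v15 tag currently points.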
hope it helps
best,
tobi
On 26.02.2021 at 11:16, Kenneth Waegeman <kenneth.waegeman@xxxxxxxx> wrote:
Hi all,
I am running a cluster managed by orchestrator/cephadm. I installed a new host for OSDs yesterday; the OSD daemons were created automatically using drivegroups service specs (https://docs.ceph.com/en/latest/cephadm/drivegroups/#drivegroups), and they started with a 15.2.9 image instead of the 15.2.8 that all other daemons in the cluster are running.
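For reference, such a drivegroup spec looks roughly like this (a minimal illustrative example, not my exact file; service_id, placement, and device filters depend on the setup):
service_type: osd
service_id: example_osd_spec
placement:
  host_pattern: '*'
data_devices:
  all: true
applied with something like:
ceph orch apply osd -i osd_spec.yml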
I have not yet run ceph orch upgrade to 15.2.9.
Is there a way to lock the version of the OSDs/daemons created by orchestrator/cephadm?
Thanks!
Kenneth
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx