cephadm rollout behavior and post-adoption issues

Hi everybody,

I'm fairly new to cephadm. I'm trying to get some hands-on experience.
I have a test cluster consisting of:
3 monitor/manager nodes
6 OSD nodes
3 RGW nodes
Pacific release, containerized deployment using:
quay.io/ceph/daemon:v6.0.11-stable-6.0-pacific-centos-stream8

I've adopted this cluster using the cephadm adopt playbook in the
ceph-ansible project.
infrastructure-playbooks/cephadm-adopt.yml
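
For reference, I invoked it roughly like this (the inventory file name is
specific to my setup):

    # "hosts" is my inventory file; adjust to your environment
    ansible-playbook -i hosts infrastructure-playbooks/cephadm-adopt.yml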

I set the default container_image option and checked the configuration
database to confirm it was set to the correct image.
The adoption process used the container_image value for the monitor, manager,
and RGW daemons (the RGWs were redeployed), but the OSDs kept the old image
they had been using before adoption.
container_image value: quay.io/ceph/ceph:v16.2.13
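
For completeness, this is roughly how I set and verified the value:

    # set the default container image and read it back from the config database
    ceph config set global container_image quay.io/ceph/ceph:v16.2.13
    ceph config dump | grep container_image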
I was wondering what could cause the image not to change for the OSDs.


After the adoption, I wanted to mount the client.admin keyring into the
monitor daemons' containers.
I added the volume through the extra_container_args field of the mon service
specification.
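
The spec looked roughly like this (reconstructed from memory; the placement
matches the "count:3;label:mons" lines in the logs below, and the keyring
path is an assumption from my environment):

    # reconstructed sketch of the spec I applied; the keyring path is my guess
    cat > mon-spec.yml <<'EOF'
    service_type: mon
    placement:
      count: 3
      label: mons
    extra_container_args:
      - "-v"
      - "/etc/ceph/ceph.client.admin.keyring:/etc/ceph/ceph.client.admin.keyring:ro"
    EOF
    ceph orch apply -i mon-spec.yml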
After I applied the spec to the cluster using ceph orch apply -i <filename>,
I noticed something odd.
I had read that cephadm redeploys daemons in a rolling fashion, but the delay
between redeploys was only about 4 seconds, and there was a period when all
three monitor daemons were in the starting state at the same time.

Is there any mechanism to increase the delay between redeployments?
Is there a way to limit redeployment to one daemon at a time?
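
The only workaround I've come up with so far is redeploying the daemons
manually, one at a time, and waiting for each to come back before touching
the next (assuming ceph orch daemon redeploy is appropriate here):

    # redeploy a single mon, then watch until it is running again
    ceph orch daemon redeploy mon.stg-pacific-mon1
    ceph orch ps --service_name=mon --refresh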

I've included some logs at the end of this message that may be helpful.

Thanks in advance.
Regards
Nima AbolhassanBeigi

NAME                  HOST              PORTS  STATUS    REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID
mon.stg-pacific-mon1  stg-pacific-mon1         starting  -          -    -        6144M    <unknown>  <unknown>
mon.stg-pacific-mon2  stg-pacific-mon2         starting  -          -    -        6144M    <unknown>  <unknown>
mon.stg-pacific-mon3  stg-pacific-mon3         starting  -          -    -        6144M    <unknown>  <unknown>


Every 1.0s: ceph orch ps --service_name=mon --refresh    stg-pacific-mon1: Sun Dec 29 15:52:15 2024

NAME                  HOST              PORTS  STATUS    REFRESHED  AGE  MEM USE  MEM LIM  VERSION    IMAGE ID
mon.stg-pacific-mon1  stg-pacific-mon1         starting  -          -    -        6144M    <unknown>  <unknown>
mon.stg-pacific-mon2  stg-pacific-mon2         starting  -          -    -        6144M    <unknown>  <unknown>
mon.stg-pacific-mon3  stg-pacific-mon3         starting  -          -    -        6144M    <unknown>  <unknown>

Every 1.0s: ceph orch ps --service_name=mon --refresh    stg-pacific-mon1: Sun Dec 29 15:52:41 2024

NAME                  HOST              PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
mon.stg-pacific-mon1  stg-pacific-mon1         running (37s)  2s ago     6d   30.2M    6144M    16.2.13  9e29efbb67d5  367a7f3bac31
mon.stg-pacific-mon2  stg-pacific-mon2         running (33s)  2s ago     11d  30.5M    6144M    16.2.13  9e29efbb67d5  43828e2453da
mon.stg-pacific-mon3  stg-pacific-mon3         running (29s)  2s ago     11d  24.8M    6144M    16.2.13  9e29efbb67d5  d698a809370f

2024-12-29T12:18:16.974710+0000 mgr.stg-pacific-mon1 (mgr.173193) 322363 : cephadm [INF] Saving service mon spec with placement count:7;label:mons
2024-12-29T12:19:44.140240+0000 mgr.stg-pacific-mon1 (mgr.173193) 322545 : cephadm [INF] Saving service mon spec with placement count:3;label:mons
2024-12-29T12:19:47.497916+0000 mgr.stg-pacific-mon1 (mgr.173193) 322555 : cephadm [INF] Redeploying mon.stg-pacific-mon1, (container cli args changed) . . .
2024-12-29T12:19:47.507683+0000 mgr.stg-pacific-mon1 (mgr.173193) 322556 : cephadm [INF] Deploying daemon mon.stg-pacific-mon1 on stg-pacific-mon1
2024-12-29T12:19:50.970499+0000 mgr.stg-pacific-mon1 (mgr.173193) 322562 : cephadm [INF] Redeploying mon.stg-pacific-mon2, (container cli args changed) . . .
2024-12-29T12:19:50.982244+0000 mgr.stg-pacific-mon1 (mgr.173193) 322563 : cephadm [INF] Deploying daemon mon.stg-pacific-mon2 on stg-pacific-mon2
2024-12-29T12:19:55.774118+0000 mgr.stg-pacific-mon1 (mgr.173193) 322573 : cephadm [INF] Redeploying mon.stg-pacific-mon3, (container cli args changed) . . .
2024-12-29T12:19:55.782550+0000 mgr.stg-pacific-mon1 (mgr.173193) 322574 : cephadm [INF] Deploying daemon mon.stg-pacific-mon3 on stg-pacific-mon3

2024-12-29T12:21:56.620507+0000 mgr.stg-pacific-mon1 (mgr.173193) 322830 : cephadm [INF] Saving service mon spec with placement count:3;label:mons
2024-12-29T12:22:02.198770+0000 mgr.stg-pacific-mon1 (mgr.173193) 322844 : cephadm [INF] Redeploying mon.stg-pacific-mon1, (container cli args changed) . . .
2024-12-29T12:22:02.210484+0000 mgr.stg-pacific-mon1 (mgr.173193) 322845 : cephadm [INF] Deploying daemon mon.stg-pacific-mon1 on stg-pacific-mon1
2024-12-29T12:22:05.654926+0000 mgr.stg-pacific-mon1 (mgr.173193) 322851 : cephadm [INF] Redeploying mon.stg-pacific-mon2, (container cli args changed) . . .
2024-12-29T12:22:05.667782+0000 mgr.stg-pacific-mon1 (mgr.173193) 322852 : cephadm [INF] Deploying daemon mon.stg-pacific-mon2 on stg-pacific-mon2
2024-12-29T12:22:10.332756+0000 mgr.stg-pacific-mon1 (mgr.173193) 322862 : cephadm [INF] Redeploying mon.stg-pacific-mon3, (container cli args changed) . . .
2024-12-29T12:22:10.344298+0000 mgr.stg-pacific-mon1 (mgr.173193) 322863 : cephadm [INF] Deploying daemon mon.stg-pacific-mon3 on stg-pacific-mon3