On 6/13/22 07:39, farhad kh wrote:
I upgraded my cluster to 17.2 and the upgrade process got stuck. I have this error:

[root@ceph2-node-01 ~]# ceph -s
  cluster:
    id:     151b48f2-fa98-11eb-b7c4-000c29fa2c84
    health: HEALTH_WARN
            Reduced data availability: 32 pgs inactive
            Degraded data redundancy: 32 pgs undersized

  services:
    mon: 3 daemons, quorum ceph2-node-03,ceph2-node-02,ceph2-node-01 (age 4h)
    mgr: ceph2-node-02.mjagnd(active, since 11h), standbys: ceph2-node-01.hgrjgo
    osd: 12 osds: 12 up (since 43m), 12 in (since 21h)

  data:
    pools:   1 pools, 32 pgs
    objects: 0 objects, 0 B
    usage:   434 MiB used, 180 GiB / 180 GiB avail
    pgs:     100.000% pgs not active
             32 undersized+peered
^^ Nowadays the mgr is a critical component, especially in cephadm deployments. Your cluster is probably fine, but the manager is not. At least, that is my experience when "100% pgs not active" is reported.
What does a "ceph versions" give? Then we can check which daemons have and which have not been upgraded (yet).
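On a cluster stuck mid-upgrade you would typically see a mix of versions. Something like this (the output below is illustrative, not from your cluster; the hashes are elided):

  $ ceph versions
  {
      "mon": { "ceph version 17.2.0 (...) quincy (stable)": 3 },
      "mgr": { "ceph version 17.2.0 (...) quincy (stable)": 2 },
      "osd": {
          "ceph version 16.2.9 (...) pacific (stable)": 1,
          "ceph version 17.2.0 (...) quincy (stable)": 11
      },
      "overall": { ... }
  }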
The cluster is not progressing with the upgrade while it is in WARN state, and possibly also because the manager is not working correctly.
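With cephadm you can also see what the upgrade itself is doing, e.g.:

  $ ceph orch upgrade status     # shows target image and progress
  $ ceph -W cephadm              # follow the cephadm log live
  $ ceph log last cephadm        # recent cephadm log entries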
One disk has not yet been upgraded to the new version, and the upgrade process has stopped altogether. How can I solve this problem? What is the cause?
You might be able to stop the active manager, let the standby take over, and see if that improves things.
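With cephadm that would be something like the following (the mgr daemon name is taken from your "ceph -s" output above):

  $ ceph mgr fail ceph2-node-02.mjagnd   # force failover to the standby
  $ ceph -s                              # verify the standby took over
  $ ceph orch upgrade resume             # then let the upgrade continue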
Gr. Stefan