Hello all,

I've been having some real trouble getting cephadm to apply even very minor point-release updates cleanly. Twice now, applying the point update from 15.2.6 -> 15.2.7 and then 15.2.7 -> 15.2.8 has become blocked partway through and made no further progress, requiring digging deep into internals to unblock things.

In the most recent attempt (15.2.7 -> 15.2.8), the Orchestrator cleanly replaced the Mon and MGR containers in the first steps, but when it came to replacing the Crash daemon containers, the running 15.2.7 Crash container was purged and the container update operations then seemed to get blocked trying to start it again on the older image, leading to an infinite loop in the logs of Podman trying to start a non-existent container: https://pastebin.com/9zdMs1XU

Forcing a `ceph orch daemon rm` of the affected Crash daemon on that host just repeats the loop. I then tried removing the Crash service and all of its daemons through the Orchestrator. This purged all running Crash containers from all hosts, after which I re-applied a service spec to restart them, hopefully on the new image (a rough sketch of the commands is in the P.S. below).

The Orchestrator's removal of the Crash containers seems to have left container state dangling on the hosts, however, as we now see the same issue of Crash containers failing to start on *every* host in the cluster due to left-over container state: https://pastebin.com/tjaegxqg

At this point I'm not certain whether Podman (v1.6.4 from EPEL, CentOS 7.9) or the Orchestrator is to blame for leaving this state dangling and blocking new container creation, but it's proving a real problem in applying even simple minor version point updates.

Has anyone else been seeing similar behaviour when applying minor version updates via cephadm + Orchestrator? Are there any good workarounds to clean up the dangling container state?

--
*******************
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pfb29@xxxxxxxxx
Tel: 0044-1223-746548
*******************
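
P.S. For reference, the sequence I ran was roughly the following. The spec file name and contents are only a sketch of what I applied, and <hostname> is a placeholder for the affected host, so treat the exact syntax as approximate rather than a verbatim transcript:

    # remove the stuck Crash daemon on the affected host
    ceph orch daemon rm crash.<hostname> --force

    # remove the whole Crash service, then re-apply a spec for it
    ceph orch rm crash
    ceph orch apply -i crash.yml

    # crash.yml -- minimal spec placing a Crash daemon on every host
    service_type: crash
    placement:
      host_pattern: '*'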