Hi,

After the upgrade to 16.2.6, I am now seeing this error:

9/20/21 10:45:00 AM [ERR] cephadm exited with an error code: 1, stderr: Inferring config /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1.robeckert.us/config
ERROR: [Errno 2] No such file or directory: '/var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1.robeckert.us/config'
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1366, in _remote_connection
    yield (conn, connr)
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 1263, in _run_cephadm
    code, '\n'.join(err)))
orchestrator._interface.OrchestratorError: cephadm exited with an error code: 1, stderr: Inferring config /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1.robeckert.us/config
ERROR: [Errno 2] No such file or directory: '/var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1.robeckert.us/config'

The rhel1 server has a monitor under /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1, and it is up and active. If I copy /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1 to /var/lib/ceph/fe3a7cb0-69ca-11eb-8d45-c86000d08867/mon.rhel1.robeckert.us, the error clears; then cephadm removes the folder with the domain name, and the error starts showing up in the log again.

After a few minutes, I get the all clear (log lines below are newest first):

9/20/21 11:00:00 AM [INF] overall HEALTH_OK
9/20/21 10:58:38 AM [INF] Removing key for mon.
9/20/21 10:58:37 AM [INF] Removing daemon mon.rhel1.robeckert.us from rhel1.robeckert.us
9/20/21 10:58:37 AM [INF] Removing monitor rhel1.robeckert.us from monmap...
9/20/21 10:58:37 AM [INF] Safe to remove mon.rhel1.robeckert.us: not in monmap (['rhel1', 'story', 'cube'])
9/20/21 10:52:21 AM [INF] Cluster is now healthy
9/20/21 10:52:21 AM [INF] Health check cleared: CEPHADM_REFRESH_FAILED (was: failed to probe daemons or devices)
9/20/21 10:51:15 AM

I checked all of the configurations and can't find any reason it wants the monitor with the domain.
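In case it helps anyone reproduce what I mean, the copy step above can be sketched like this. This is only an illustration: it recreates the directory layout under a scratch directory instead of touching the live /var/lib/ceph tree, and the config contents here are a stand-in, not the real monitor data.

```shell
# Illustrative sketch of the workaround described above, done in a temp dir
# so nothing on the live cluster is modified.
fsid=fe3a7cb0-69ca-11eb-8d45-c86000d08867
root=$(mktemp -d)
base="$root/var/lib/ceph/$fsid"

# Stand-in for the existing, healthy mon data dir (short hostname).
mkdir -p "$base/mon.rhel1"
echo "[global]" > "$base/mon.rhel1/config"

# cephadm is inferring the config from the FQDN-named dir; copying the
# short-named dir over is the step that temporarily clears the error.
cp -a "$base/mon.rhel1" "$base/mon.rhel1.robeckert.us"

ls "$base"
```

On the real host the same `cp -a` against the actual paths is exactly what I did, and cephadm later deletes the mon.rhel1.robeckert.us copy again on its own.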
But then the errors start up again. I haven't found any messages in the log preceding them that would explain the trigger, so I am going to monitor more closely. This doesn't seem to affect any functionality, just lots of messages in the log.

Thanks,
Rob
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx