Hi,

I'm in the middle of upgrading a 12.2.8 cluster to 13.2.4 and I've
noticed that during the Yum/RPM upgrade the OSDs are being restarted:

Jan 15 11:24:25 xxxxx yum[2348259]: Updated: 2:ceph-base-13.2.4-0.el7.x86_64
Jan 15 11:24:47 xxxxx systemd[1]: Stopped target ceph target allowing to start/stop all ceph*@.service instances at once.
Jan 15 11:24:47 xxxxx systemd[1]: Stopped target ceph target allowing to start/stop all ceph-osd@.service instances at once.
Jan 15 11:24:47 xxxxx systemd[1]: Stopping Ceph object storage daemon osd.267...
....
....
Jan 15 11:24:54 xxxxx systemd[1]: Started Ceph object storage daemon osd.143.
Jan 15 11:24:54 xxxxx systemd[1]: Started Ceph object storage daemon osd.1156.
Jan 15 11:24:54 xxxxx systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
Jan 15 11:24:54 xxxxx systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
Jan 15 11:24:54 xxxxx yum[2348259]: Updated: 2:ceph-selinux-13.2.4-0.el7.x86_64
Jan 15 11:24:59 xxxxx yum[2348259]: Updated: 2:ceph-osd-13.2.4-0.el7.x86_64

/etc/sysconfig/ceph contains:

CEPH_AUTO_RESTART_ON_UPGRADE=no

So this makes me wonder: what causes the OSDs to be restarted after the
package upgrade, given that we are explicitly not allowing this restart?

Checking ceph.spec.in in both the Luminous and Mimic branches, I can't
find a good reason why this is happening, because the restart logic
checks 'CEPH_AUTO_RESTART_ON_UPGRADE', which isn't set to 'yes' here.
In addition, ceph.spec.in never restarts 'ceph.target' itself, yet that
is exactly the target being stopped and started in the log above.

Could it be that the ceph-selinux upgrade initiates the restart of
these daemons?

CentOS Linux release 7.6.1810 (Core)
Luminous 12.2.8
Mimic 13.2.4

Has anybody seen this before?

Wido
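
P.S. From my reading, the restart guard in ceph.spec.in looks roughly
like the sketch below (paraphrased, not a verbatim quote of the spec),
which is why I would expect no restart at all with our sysconfig
setting:

    # Sketch of the upgrade-time restart guard: only restart daemons
    # when the admin has opted in via /etc/sysconfig/ceph
    SYSCONF_CEPH=/etc/sysconfig/ceph
    if [ -f "$SYSCONF_CEPH" ] && [ -r "$SYSCONF_CEPH" ]; then
        . "$SYSCONF_CEPH"
    fi
    if [ "X$CEPH_AUTO_RESTART_ON_UPGRADE" = "Xyes" ]; then
        /usr/bin/systemctl try-restart ceph.target >/dev/null 2>&1 || :
    fi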
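
P.P.S. If anyone wants to check what the packages actually do on their
own nodes, the scriptlets that RPM runs can be dumped from the
installed packages directly, e.g.:

    # Show the %pre/%post/%preun/%postun scriptlets shipped with a package
    rpm -q --scripts ceph-selinux
    rpm -q --scripts ceph-base

Grepping that output for 'systemctl' should show whether the
ceph-selinux scriptlet stops and starts ceph.target regardless of
CEPH_AUTO_RESTART_ON_UPGRADE.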