On 12/19/2016 09:58 PM, Ken Dreyer wrote:
> I looked into this again on a Trusty VM today. I set up a single
> mon+osd cluster on v10.2.3, with the following:
>
> # status ceph-osd id=0
> ceph-osd (ceph/0) start/running, process 1301
>
> # ceph daemon osd.0 version
> {"version":"10.2.3"}
>
> I ran "apt-get upgrade" to go from 10.2.3 -> 10.2.5, and the OSD PID
> (1301) and version from the admin socket (v10.2.3) remained the same.

From which repository did you retrieve the 10.2.3 version of Ceph? I could run a test too.

> Could something else be restarting the daemons in your case?

I use Puppet to manage my hosts, but I am sure the "ceph" services are all *un*managed by Puppet (the Puppet run is only weekly, and I have noticed the behavior on all 5 of my nodes). Management of the "ceph" services is completely manual in my case.
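
For reference, here is a quick sketch of the checks I plan to run on my nodes to confirm whether the daemons really survive the upgrade. The "osd.0" id and the upstart job name are taken from your test above; the "ceph" package name passed to dpkg-query is an assumption, so adjust it for however the packages are split on your release:

  # Installed version on disk, according to dpkg:
  dpkg-query -W -f='${Version}\n' ceph

  # Version the running daemon reports via its admin socket;
  # this stays at the old version until the OSD is restarted:
  ceph daemon osd.0 version

  # PID under upstart on Trusty; a changed PID after
  # "apt-get upgrade" means something restarted the daemon:
  status ceph-osd id=0

If the dpkg version is newer than what the admin socket reports but the PID is unchanged, the package was upgraded without the daemon being restarted, which is what you observed.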