It's not that simple.... see http://tracker.ceph.com/issues/21672

For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
updated -- so the RPMs restart ceph.target. What's worse is that this
seems to happen before all the new updated files are in place.

Our 12.2.8 to 12.2.10 upgrade procedure is:

  systemctl stop ceph.target
  yum update
  systemctl start ceph.target

-- Dan

On Mon, Dec 3, 2018 at 12:42 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
>
> Upgrading Ceph packages does not restart the services -- exactly for
> this reason.
>
> This means there's something broken with your yum setup if the
> services are restarted when only installing the new version.
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Mon, Dec 3, 2018 at 11:56 AM Jan Kasprzak <kas@xxxxxxxxxx> wrote:
> >
> > Hello, ceph users,
> >
> > I have a small(-ish) Ceph cluster with OSDs on each host and, in
> > addition, mons on the first three hosts. Is it possible to upgrade
> > the cluster to Luminous without service interruption?
> >
> > I have tested that when I run "yum --enablerepo Ceph update" on a
> > mon host, the OSDs on that host remain down until all three mons
> > are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
> > and keep ceph-osd running the old version (Jewel in my case) as long
> > as possible? It seems RPM dependencies forbid this, but with --nodeps
> > it could be done.
> >
> > Is there a supported way to upgrade a host running both a mon and
> > OSDs to Luminous?
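The stop/update/start procedure above can be sketched as a per-host script. The `noout` flag is a standard addition (not in Dan's message) so CRUSH does not start rebalancing while the host's OSDs are down; the `DRY_RUN` guard and `run` helper are illustrative additions so the sketch can be exercised without a live cluster:

```shell
#!/bin/sh
# Sketch of the stop/update/start upgrade procedure, one host at a time.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# on a real node. The run() wrapper is an illustration aid, not part of
# any Ceph tooling.
set -eu
: "${DRY_RUN:=1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run ceph osd set noout           # keep PGs from remapping while OSDs are down
run systemctl stop ceph.target   # stop daemons BEFORE yum replaces any files
run yum update -y                # RPM scriptlets can no longer restart a half-updated daemon
run systemctl start ceph.target  # bring daemons back on the new version
run ceph osd unset noout         # allow normal recovery again
```

The point of stopping `ceph.target` first is exactly the tracker issue above: if the `ceph-selinux` `%post` scriptlet restarts `ceph.target` mid-transaction, daemons can come up against a partially updated install. Stopping everything before `yum update` makes that restart a no-op.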
> >
> > Thanks,
> >
> > -Yenya
> >
> > --
> > | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
> > | http://www.fi.muni.cz/~kas/                       GPG: 4096R/A45477D5 |
> > This is the world we live in: the way to deal with computers is to google
> > the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com