Re: Upgrade to Luminous (mon+osd)

On Mon, Dec 3, 2018 at 5:00 PM Jan Kasprzak <kas@xxxxxxxxxx> wrote:
>
> Dan van der Ster wrote:
> : It's not that simple.... see http://tracker.ceph.com/issues/21672
> :
> : For the 12.2.8 to 12.2.10 upgrade it seems the selinux module was
> : updated -- so the rpms restart the ceph.target.
> : What's worse is that this seems to happen before all the new updated
> : files are in place.
> :
> : Our 12.2.8 to 12.2.10 upgrade procedure is:
> :
> : systemctl stop ceph.target
> : yum update
> : systemctl start ceph.target
>
>         Yes, this looks reasonable. Except that when upgrading
> from Jewel, even after the restart the OSDs do not work until
> _all_ mons are upgraded. So effectively, if a PG happens to be placed
> only on the mon hosts, there will be a service outage during the
> upgrade from Jewel.
>
>         So I guess the upgrade procedure described here:
>
> http://docs.ceph.com/docs/master/releases/luminous/#upgrade-from-jewel-or-kraken
>
> is misleading: the mons and osds get restarted by the package
> upgrade itself anyway. The user should be warned that, for this reason,
> the package upgrades should be run sequentially, and that the upgrade is
> not possible without a service outage when there are OSDs on the mon
> hosts and the cluster is running under SELinux.

Note that ceph-selinux will only restart ceph.target if SELinux is enabled.
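
(If you want to see for yourself what the package does on your version:

# rpm -q --scripts ceph-selinux

and look for the selinuxenabled check around the ceph.target restart.)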

So probably you could set SELINUX=disabled in /etc/selinux/config,
reboot, then upgrade the rpms and restart the daemons selectively.
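
Roughly, something like this (untested; assuming the stock
/etc/selinux/config and the ceph-mon/ceph-osd systemd targets that the
packages ship):

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# reboot
# yum --enablerepo Ceph update        # no ceph.target restart with selinux off
# systemctl restart ceph-mon.target   # mons first
# systemctl restart ceph-osd.target   # osds only once all mons run luminous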

And BTW, setenforce 0 apparently doesn't disable enough of SELinux --
you really do need to reboot:

# setenforce 0
# /usr/sbin/selinuxenabled
# echo $?
0
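
(That is, selinuxenabled still exits 0, meaning "enabled". After setting
SELINUX=disabled and rebooting, the same check should give:

# /usr/sbin/selinuxenabled
# echo $?
1

i.e. exit status 1.)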

-- dan

>
>         Also, there is another important thing omitted by the above upgrade
> procedure: after "ceph osd require-osd-release luminous"
> I got a HEALTH_WARN saying "application not enabled on X pool(s)".
> I fixed this by running the following scriptlet:
>
> ceph osd pool ls | while read pool; do ceph osd pool application enable "$pool" rbd; done
>
> (yes, all of my pools are used for rbd for now). Maybe this should be
> mentioned in the release notes as well. Thanks,
>
> -Yenya
>
> : On Mon, Dec 3, 2018 at 12:42 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> : >
> : > Upgrading Ceph packages does not restart the services -- exactly for
> : > this reason.
> : >
> : > This means there's something broken with your yum setup if the
> : > services are restarted when only installing the new version.
> : >
> : >
> : > Paul
> : >
> : > --
> : > Paul Emmerich
> : >
> : > Looking for help with your Ceph cluster? Contact us at https://croit.io
> : >
> : > croit GmbH
> : > Freseniusstr. 31h
> : > 81247 München
> : > www.croit.io
> : > Tel: +49 89 1896585 90
> : >
> : > Am Mo., 3. Dez. 2018 um 11:56 Uhr schrieb Jan Kasprzak <kas@xxxxxxxxxx>:
> : > >
> : > >         Hello, ceph users,
> : > >
> : > > I have a small(-ish) Ceph cluster, where there are osds on each host,
> : > > and in addition to that, there are mons on the first three hosts.
> : > > Is it possible to upgrade the cluster to Luminous without service
> : > > interruption?
> : > >
> : > > I have tested that when I run "yum --enablerepo Ceph update" on a
> : > > mon host, the osds on that host remain down until all three mons
> : > > are upgraded to Luminous. Is it possible to upgrade ceph-mon only,
> : > > and keep ceph-osd running the old version (Jewel in my case) as long
> : > > as possible? It seems RPM dependencies forbid this, but with --nodeps
> : > > it could be done.
> : > >
> : > > Is there a supported way to upgrade a host running both a mon and an
> : > > osd to Luminous?
> : > >
> : > > Thanks,
> : > >
> : > > -Yenya
> : > >
> : > > --
> : > > | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
> : > > | http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
> : > >  This is the world we live in: the way to deal with computers is to google
> : > >  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
> : > > _______________________________________________
> : > > ceph-users mailing list
> : > > ceph-users@xxxxxxxxxxxxxx
> : > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> : > _______________________________________________
> : > ceph-users mailing list
> : > ceph-users@xxxxxxxxxxxxxx
> : > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> --
> | Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
> | http://www.fi.muni.cz/~kas/                         GPG: 4096R/A45477D5 |
>  This is the world we live in: the way to deal with computers is to google
>  the symptoms, and hope that you don't have to watch a video. --P. Zaitcev
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



