Re: ceph-osd processes restart during Luminous -> Mimic upgrade on CentOS 7

On 1/15/19 11:39 AM, Dan van der Ster wrote:
> Hi Wido,
> 
> `rpm -q --scripts ceph-selinux` will tell you why.
> 
> It was the same from 12.2.8 to 12.2.10: http://tracker.ceph.com/issues/21672
> 

Thanks for pointing it out!
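
For anyone hitting this in the archives: the scriptlets can be dumped
straight from the installed package (this assumes an RPM-based host with
ceph-selinux installed), which is where the restart hides:

```shell
# Print every scriptlet (%pre/%post/%preun/%postun) shipped with the
# installed ceph-selinux package.
rpm -q --scripts ceph-selinux

# Narrow the output to the lines that touch systemd units.
rpm -q --scripts ceph-selinux | grep -n 'systemctl'
```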

> And the problem is worse than you described, because the daemons are
> even restarted before all the package files have been updated.
> 
> Our procedure on these upgrades is systemctl stop ceph.target; yum
> update; systemctl start ceph.target (or ceph-volume lvm activate
> --all).
> 

Yes, I was considering that as well. I'm just not used to daemons
restarting on their own during a package upgrade.
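
Dan's procedure can be sketched as a small per-host script. This is only
a sketch: the DRY_RUN switch is my own illustrative addition, and the
usual precaution of setting 'noout' cluster-wide beforehand (so stopped
OSDs aren't marked out and rebalanced) is assumed, not shown:

```shell
#!/bin/sh
# Per-host upgrade sketch following the stop -> update -> start procedure.
# Assumes CentOS 7 with yum; run 'ceph osd set noout' cluster-wide first.
DRY_RUN=${DRY_RUN:-1}   # hypothetical safety switch, for illustration only

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run systemctl stop ceph.target     # stop all daemons before RPMs change
run yum -y update "ceph*"          # upgrade every ceph package in one go
run systemctl start ceph.target    # start daemons on the new version
```

With DRY_RUN=1 (the default) it only prints what it would do, so you can
review the steps before letting it loose on a host.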

SELinux is set to Permissive mode anyway, so why restart the daemons
during the package upgrade while we are running in Permissive mode?

I'll update the ticket with feedback, as this is not what I (and, it
seems, other users) expect.
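
For context, the guard in ceph.spec.in that we expected to protect us
looks roughly like this (a paraphrased illustration, not the verbatim
spec file): it sources /etc/sysconfig/ceph and only restarts when the
flag is explicitly "yes":

```shell
# Paraphrased shape of the restart guard in ceph.spec.in's scriptlets
# (illustration only; the real scriptlet differs in detail).
maybe_restart() {
    SYSCONF_CEPH=/etc/sysconfig/ceph
    # Pick up CEPH_AUTO_RESTART_ON_UPGRADE from the sysconfig file, if any.
    if [ -f "$SYSCONF_CEPH" ] && [ -r "$SYSCONF_CEPH" ]; then
        . "$SYSCONF_CEPH"
    fi
    if [ "$CEPH_AUTO_RESTART_ON_UPGRADE" = "yes" ]; then
        echo "would run: systemctl try-restart ceph.target"
    else
        echo "restart skipped"
    fi
}

maybe_restart
```

The point being: whatever stopped ceph.target mid-upgrade on our hosts
did not go through this guard, since our flag is set to "no".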

Wido

> Cheers, Dan
> 
> On Tue, Jan 15, 2019 at 11:33 AM Wido den Hollander <wido@xxxxxxxx> wrote:
>>
>> Hi,
>>
>> I'm in the middle of upgrading a 12.2.8 cluster to 13.2.4 and I've
>> noticed that during the Yum/RPM upgrade the OSDs are being restarted.
>>
>> Jan 15 11:24:25 xxxxx yum[2348259]: Updated: 2:ceph-base-13.2.4-0.el7.x86_64
>> Jan 15 11:24:47 xxxxx systemd[1]: Stopped target ceph target allowing to
>> start/stop all ceph*@.service instances at once.
>> Jan 15 11:24:47 xxxxx systemd[1]: Stopped target ceph target allowing to
>> start/stop all ceph-osd@.service instances at once.
>> Jan 15 11:24:47 xxxxx systemd[1]: Stopping Ceph object storage daemon
>> osd.267...
>> ....
>> ....
>> Jan 15 11:24:54 xxxxx systemd[1]: Started Ceph object storage daemon
>> osd.143.
>> Jan 15 11:24:54 xxxxx systemd[1]: Started Ceph object storage daemon
>> osd.1156.
>> Jan 15 11:24:54 xxxxx systemd[1]: Reached target ceph target allowing to
>> start/stop all ceph-osd@.service instances at once.
>> Jan 15 11:24:54 xxxxx systemd[1]: Reached target ceph target allowing to
>> start/stop all ceph*@.service instances at once.
>> Jan 15 11:24:54 xxxxx yum[2348259]: Updated:
>> 2:ceph-selinux-13.2.4-0.el7.x86_64
>> Jan 15 11:24:59 xxxxx yum[2348259]: Updated: 2:ceph-osd-13.2.4-0.el7.x86_64
>>
>> In /etc/sysconfig/ceph there is CEPH_AUTO_RESTART_ON_UPGRADE=no
>>
>> So this makes me wonder: what causes the OSDs to be restarted during
>> the package upgrade, as we are not allowing this restart?
>>
>> Checking ceph.spec.in in both the Luminous and Mimic branches, I can't
>> find a good reason for this: the restart logic there checks
>> 'CEPH_AUTO_RESTART_ON_UPGRADE', which isn't set to 'yes'.
>>
>> In addition, ceph.spec.in never restarts 'ceph.target' itself, yet that
>> is exactly the unit being restarted.
>>
>> Could it be that the SELinux upgrade initiates the restart of these daemons?
>>
>> CentOS Linux release 7.6.1810 (Core)
>> Luminous 12.2.8
>> Mimic 13.2.4
>>
>> Has anybody seen this before?
>>
>> Wido
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


