Re: RGWs stop processing requests after upgrading to Reef

Hi,

I don't see a reason why Quincy rgw daemons shouldn't work with a Reef cluster. It would basically mean that you have a staggered upgrade [1] running and just haven't upgraded the RGWs yet.

It should also work to simply downgrade them by providing a different image and redeploying the rgw service. I don't see an option to specify a different image for rgw only the way you can for osd, mon, mgr and mds (ceph config set mon container_image <my_image>). Or, if it's just temporary anyway, you could edit the unit.run file (/var/lib/ceph/{FSID}/rgw.{RGW}/unit.run) directly and restart the daemon until you find the root cause. Changing the default container_image globally wouldn't be my preferred choice, since it could mess up other daemons if any failed services need to be (re)deployed.
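To illustrate the unit.run route, roughly (the FSID, daemon name and image tags below are placeholders, adjust them to your deployment; note that cephadm regenerates unit.run on the next reconfig/redeploy of that daemon, so this is only a stop-gap while debugging):

    # on the RGW host: check which image the unit currently runs
    grep -o 'quay.io/ceph/ceph:[^ ]*' /var/lib/ceph/{FSID}/rgw.{RGW}/unit.run

    # point it at the previous release and restart the daemon
    sed -i 's#quay.io/ceph/ceph:v18.2.1#quay.io/ceph/ceph:v17.2.6#' \
        /var/lib/ceph/{FSID}/rgw.{RGW}/unit.run
    systemctl restart ceph-{FSID}@rgw.{RGW}.service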

Regards,
Eugen

[1] https://docs.ceph.com/en/latest/cephadm/upgrade/#upgrading-ceph

Quoting Iain Stott <Iain.Stott@xxxxxxx>:

Hi,

We have recently upgraded one of our clusters from Quincy 17.2.6 to Reef 18.2.1, and since then we have had 3 instances of our RGWs stopping processing requests. We have 3 hosts, each running a single RGW instance, and all 3 seem to stop processing requests at the same time, causing our storage to become unavailable. A restart or redeploy of the RGW service brings them back OK. The cluster was deployed using ceph-ansible, but we have since adopted it into cephadm, which is how the upgrade was performed.

We have enabled debug logging, as there was nothing out of the ordinary in the normal logs, and we are currently sifting through the output from the last crash.
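(For anyone reproducing this: RGW debug logging can be raised centrally along these lines, using standard ceph config commands; the exact levels are a judgment call:

    ceph config set client.rgw debug_rgw 20
    ceph config set client.rgw debug_ms 1

    # revert to the defaults once enough has been captured
    ceph config rm client.rgw debug_rgw
    ceph config rm client.rgw debug_ms
)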

We are just wondering if it is possible to run Quincy RGWs instead of Reef, as we didn't have this issue prior to the upgrade?

We have 3 clusters in a multisite setup and are holding off on upgrading the other 2 due to this issue.


Thanks
Iain

Iain Stott
OpenStack Engineer
Iain.Stott@xxxxxxx
www.thg.com

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


