Sorry, I missed the "client" entity:
host01:~ # ceph config set client container_image my-registry:ceph/ceph:v16.2.13.66
host01:~ # ceph orch redeploy my-rgw
Now I have mixed versions:
host01:~ # ceph versions -f json | jq '.rgw'
{
  "ceph version 16.2.13-66-g54799ee0666 (54799ee06669271880ee5fc715f99202002aa371) pacific (stable)": 2
}
host01:~ # ceph versions -f json | jq '.mon'
{
  "ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)": 3
}
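To double-check which image each RGW daemon is actually running, something like this should work (the jq filter is just an illustration; the exact JSON field names may differ slightly between releases):

host01:~ # ceph orch ps --daemon-type rgw
host01:~ # ceph orch ps --format json | jq '.[] | select(.daemon_type == "rgw") | {daemon_id, version, container_image_name}'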
But this is just a test cluster, and I haven't verified the functionality (I don't have any clients connected).
Quoting Eugen Block <eblock@xxxxxx>:
Hi,
I don't see a reason why Quincy RGW daemons shouldn't work with a Reef cluster. It would basically mean that you have a staggered upgrade [1] in progress and haven't upgraded the RGWs yet.

It should also work to just downgrade them, e.g. by providing a different default image and then redeploying the rgw service. I don't see an option to specify a different image for rgw only, as you can for osd, mon, mgr and mds (ceph config set mon container_image <my_image>). Or, if it's only temporary anyway, you could edit the unit.run file directly (/var/lib/ceph/{FSID}/rgw.{RGW}/unit.run) and restart the daemon until you find the root cause. Changing the default container_image globally wouldn't be my preferred choice, since it could mess up other daemons if any (re)deployment of failed services becomes necessary.
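For reference, the unit.run approach might look roughly like this (a sketch only: {FSID} and {RGW} are placeholders, and the sed pattern assumes the old and new image tags appear verbatim in the podman/docker run line inside unit.run):

host01:~ # sed -i 's|ceph/ceph:v18.2.1|ceph/ceph:v17.2.6|' /var/lib/ceph/{FSID}/rgw.{RGW}/unit.run
host01:~ # systemctl restart ceph-{FSID}@rgw.{RGW}.service

Note that a redeploy or upgrade will regenerate unit.run, so this change only lasts until cephadm touches the daemon again.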
Regards,
Eugen
[1] https://docs.ceph.com/en/latest/cephadm/upgrade/#upgrading-ceph
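To later finish the RGW portion of a staggered upgrade, the flags documented in [1] can target daemon types, e.g. (the image name is a placeholder):

host01:~ # ceph orch upgrade start --image my-registry:ceph/ceph:v18.2.1 --daemon-types rgw
host01:~ # ceph orch upgrade status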
Quoting Iain Stott <Iain.Stott@xxxxxxx>:
Hi,
We have recently upgraded one of our clusters from Quincy 17.2.6 to Reef 18.2.1; since then we have had 3 instances of our RGWs stopping processing requests. We have 3 hosts that each run a single RGW instance, and all 3 seem to stop processing requests at the same time, causing our storage to become unavailable. A restart or redeploy of the RGW service brings them back OK. The cluster was deployed using ceph-ansible, but we have since adopted it into cephadm, which is how the upgrade was performed.
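For context, the ceph-ansible to cephadm move followed the documented legacy adoption path; a minimal sketch (daemon names are examples, and RGW daemons are redeployed from a service spec rather than adopted):

host01:~ # cephadm adopt --style legacy --name mon.host01
host01:~ # cephadm adopt --style legacy --name osd.0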
We have enabled debug logging, as there was nothing out of the ordinary in the normal logs, and we are currently sifting through the debug logs from the last crash.
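We turned the verbosity up along the lines of the upstream troubleshooting docs, roughly like this (assuming the client.rgw section applies to all RGW daemons; 20 is the most verbose level):

host01:~ # ceph config set client.rgw debug_rgw 20
host01:~ # ceph config set client.rgw debug_ms 1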
We are just wondering if it is possible to run Quincy RGWs instead of Reef, as we didn't have this issue prior to the upgrade?
We have 3 clusters in a multisite setup and are holding off on upgrading the other 2 clusters due to this issue.
Thanks
Iain
Iain Stott
OpenStack Engineer
Iain.Stott@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx