Re: Octopus: conversion from ceph-ansible to Cephadm causes unexpected 15.2.15→.13 downgrade for MDSs and RGWs

On 16/12/2021 16:07, Robert Sander wrote:
> On 16.12.21 14:58, Florian Haas wrote:
>
>> Yes, we are aware that that's how you specify the image *on upgrade.*
>> The question was about how to avoid the silent *downgrade* of RGWs and
>> MDSs during "ceph orch apply", so that a subsequent point-release
>> upgrade (within Octopus) for those services is no longer necessary.
>
> I think the default in Octopus is an image named "ceph/ceph:v15" or
> similar. When deploying RGWs and MDSs after the adoption, cephadm pulls
> that image from Docker Hub, resulting in the latest version there:
> 15.2.13.

Yes, https://ceph.io/en/news/blog/2021/v15-2-15-octopus-released/ does
mention that the default container location used by Cephadm has moved to
quay.io.
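
And indeed, you can check what the old Docker Hub tag still resolves to
by running it directly (a quick sanity check; this assumes Docker is
available on the host, podman works the same way):

  docker pull docker.io/ceph/ceph:v15
  docker run --rm --entrypoint ceph docker.io/ceph/ceph:v15 --version

That should print the 15.2.13 that Robert mentioned, while
quay.io/ceph/ceph:v15 is presumably where 15.2.15 now lives.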

So for a 15.2.15 cluster that's being converted to Cephadm, Cephadm
should use quay.io from the get-go. And it does, but only for "cephadm
adopt". "ceph orch" apparently uses the old location, which appears to
cause the unexpected MDS and RGW downgrade. And this is the bit I'm
confused by.
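
For anyone trying to reproduce this, the mismatch is easy to see on a
converted cluster (a sketch; the exact output columns vary between
point releases, and I'm assuming mgr/cephadm/container_image_base is
the right option name):

  # Which image each daemon is actually running:
  ceph orch ps
  # What cephadm will use as the default image for newly deployed daemons:
  ceph config get mgr mgr/cephadm/container_image_base

The adopted MON/MGR/OSD containers show up with the quay.io image,
while the freshly deployed MDSs and RGWs show the Docker Hub one.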

> When you do a "ceph orch upgrade --image quay.io/ceph/ceph:v15.2.15"
> before deploying new RGWs and MDSs, you set the new default image for
> cephadm. No "real" upgrade will be performed, as the adopted
> containers are already running on this image.

Yes, that's an idea. But it's also *very* counterintuitive, don't you agree?
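
For the archives: the less surprising route, to my mind, is to pin the
image explicitly before deploying anything new, rather than "upgrading"
to a version the cluster already runs. A sketch only; the service names
below are placeholders, and I'm assuming the global container_image
option is what "ceph orch apply" consults in Octopus:

  # Pin the default image so new daemons pull from quay.io:
  ceph config set global container_image quay.io/ceph/ceph:v15.2.15
  # Then deploy as usual (placeholder names):
  ceph orch apply mds cephfs --placement="3"
  ceph orch apply rgw myrealm myzone --placement="3"

That avoids running an "upgrade" for daemons that were never meant to
be downgraded in the first place.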

Cheers,
Florian
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


