Re: v16.2.12 Pacific (hot-fix) released

Hi Wesley,

I can only answer your second question and give an opinion on the last one!

- Yes, the OSD activation problem (in cephadm clusters only) was introduced by an unfortunate change (an indentation problem in Python code) in 16.2.11. The issue doesn't exist in 16.2.10 and is one of the issues fixed in 16.2.12 (with the current caveat for that version).

- Because of the 16.2.11 issue and the missing validation for some of the fixes in the current 16.2.12, I'd say that if you want to upgrade to Pacific, it is advisable to target 16.2.10. At least, that is what we did successfully on 2 cephadm-based Ceph clusters (a possible upgrade command is sketched below).
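
In case it helps, on a cephadm-managed cluster the upgrade can be pinned to that exact release; the image tag below assumes the default quay.io images, so adjust it if you use your own registry:

  ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.10

Progress can then be followed with "ceph orch upgrade status".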

Cheers,

Michel

On 24/04/2023 at 15:46, Wesley Dillingham wrote:
A few questions:

- Will the 16.2.12 packages be "corrected" and reuploaded to the ceph.com
mirror, or will 16.2.13 become what 16.2.12 was supposed to be?

- Was the osd activation regression introduced in 16.2.11 (or does 16.2.10
have it as well)?

- Were the hotfixes in 16.2.12 just related to performance / time-to-activation, or
was there a total failure to activate / other breaking issue?

- Which version of Pacific is recommended at this time?

Thank you very much.

Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Mon, Apr 24, 2023 at 3:16 AM Simon Oosthoek <s.oosthoek@xxxxxxxxxxxxx>
wrote:

Dear List

we upgraded to 16.2.12 on April 17th, and since then we've seen some
unexplained downed OSD services in our cluster (264 OSDs). Is there any
risk of data loss? If so, would it be possible to downgrade, or is a fix
expected soon? If so, when? ;-)

FYI, we are running a cluster without cephadm, installed from packages.
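
In case anyone wants to check for the same symptom, the down OSDs and their recent logs on a package-based install can be listed with something like this (the OSD id is just a placeholder):

  ceph osd tree down
  systemctl status ceph-osd@<id>
  journalctl -u ceph-osd@<id> --since "1 hour ago"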

Cheers

/Simon

On 23/04/2023 03:03, Yuri Weinstein wrote:
We are writing to inform you that Pacific v16.2.12, released on April
14th, contains many unintended commits in the changelog beyond those listed
in the release notes [1].

As these extra commits are not fully tested, we request that all users
please refrain from upgrading to v16.2.12 at this time. The current
v16.2.12 will be QE validated and released as soon as possible.
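
For anyone unsure whether an upgrade has already been rolled out somewhere, the version each daemon type is currently running can be checked with:

  ceph versions

which reports the running version (including the git sha) per daemon type.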

v16.2.12 was a hotfix release meant to resolve several performance
flaws in ceph-volume, particularly during osd activation. The extra
commits target v16.2.13.

We apologize for the inconvenience. Please reach out to the mailing
list with any questions.

[1] https://ceph.io/en/news/blog/2023/v16-2-12-pacific-released/
On Fri, Apr 14, 2023 at 9:42 AM Yuri Weinstein <yweinste@xxxxxxxxxx>
wrote:
We're happy to announce the 12th hot-fix release in the Pacific series.


https://ceph.io/en/news/blog/2023/v16-2-12-pacific-released/
Notable Changes
---------------
This is a hotfix release that resolves several performance flaws in ceph-volume,
particularly during osd activation
(https://tracker.ceph.com/issues/57627).
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at
https://download.ceph.com/tarballs/ceph-16.2.12.tar.gz
* Containers at
https://quay.io/repository/ceph/ceph
* For packages, see
https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 5a2d516ce4b134bfafc80c4274532ac0d56fc1e2
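
A rough way to sanity-check which build ended up installed on a node is:

  ceph -v

whose output includes the git sha, which should match the one above for this release.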
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



