Re: v16.2.6 Pacific released

Hi,

Does the 16.2.6 release fix the following bug?

https://github.com/ceph/ceph/pull/42690

It's not listed in the changelog.



Message: 3
Date: Thu, 16 Sep 2021 15:48:42 -0400
From: David Galloway <dgallowa@xxxxxxxxxx>
Subject:  v16.2.6 Pacific released
To: ceph-announce@xxxxxxx, ceph-users@xxxxxxx, dev@xxxxxxx,
	ceph-maintainers@xxxxxxx
Message-ID: <1d402d62-5b3e-b62e-c68c-3fb2b30f1a02@xxxxxxxxxx>
Content-Type: text/plain; charset=utf-8

We're happy to announce the sixth backport release in the Pacific series.
We recommend that all users update to this release. For detailed release
notes with links and a changelog, please refer to the official blog entry at
https://ceph.io/en/news/blog/2021/v16-2-6-pacific-released

Notable Changes
---------------

* MGR: The pg_autoscaler has a new default 'scale-down' profile, which
provides better performance from the start for new pools on newly
created clusters. Existing clusters will retain the old behavior, now
called the 'scale-up' profile. For more details, see:
https://docs.ceph.com/en/latest/rados/operations/placement-groups/
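
  For example, switching an existing cluster to the new profile might
  look like this (the autoscale-profile command is described in the
  Pacific docs; verify against the page linked above):

    ceph osd pool set autoscale-profile scale-down
    ceph osd pool autoscale-status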

* CephFS: the upgrade procedure for CephFS is now simpler. It is no
longer necessary to stop all standby MDS daemons before upgrading the
sole active MDS. After disabling standby-replay, reducing max_mds to 1,
and waiting for the file systems to become stable (each fs with 1 active
and 0 stopping daemons), a rolling upgrade of all MDS daemons can be
performed.

* Dashboard: now allows users to set up and display a custom message
(MOTD, warning, etc.) in a sticky banner at the top of the page. For
more details, see:
https://docs.ceph.com/en/pacific/mgr/dashboard/#message-of-the-day-motd
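
  For example, to display a warning banner that expires after one week
  (syntax per the dashboard documentation linked above):

    ceph dashboard motd set warning 1w "Maintenance window on Saturday"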

* Several fixes in BlueStore, including a fix for the deferred write
regression, which led to excessive RocksDB flushes and compactions.
Previously, when bluestore_prefer_deferred_size_hdd was equal to or more
than bluestore_max_blob_size_hdd (both set to 64K), all the data was
deferred, which led to increased consumption of the column family used
to store deferred writes in RocksDB. Now, the
bluestore_prefer_deferred_size parameter independently controls deferred
writes, and only writes smaller than this size use the deferred write path.
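
  For example, the two thresholds mentioned above can be inspected (and,
  if needed, overridden) with the standard config commands:

    ceph config get osd bluestore_prefer_deferred_size_hdd
    ceph config get osd bluestore_max_blob_size_hdd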

* The default value of osd_client_message_cap has been set to 256, to
provide better flow control by limiting the maximum number of in-flight
client requests.
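
  For example, to inspect the new default or override it for all OSDs
  (512 is an arbitrary value chosen for illustration):

    ceph config get osd osd_client_message_cap
    ceph config set osd osd_client_message_cap 512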

* PGs no longer show an active+clean+scrubbing+deep+repair state when
osd_scrub_auto_repair is set to true, for regular deep scrubs with no
repair required.
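
  For example, to check whether auto-repair is enabled and review the
  current PG state summary:

    ceph config get osd osd_scrub_auto_repair
    ceph pg stat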

* The ceph-mgr-modules-core Debian package no longer recommends
ceph-mgr-rook, because the latter depends on python3-numpy, which cannot
be imported multiple times in different Python sub-interpreters when the
python3-numpy version is older than 1.19. Since apt-get installs
Recommends packages by default, ceph-mgr-rook was always installed along
with the ceph-mgr Debian package as an indirect dependency. If your
workflow depends on this behavior, you might want to install
ceph-mgr-rook separately, as shown below.
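
  For example, on Debian/Ubuntu the old behavior can be restored by
  installing the package explicitly:

    apt-get install ceph-mgr ceph-mgr-rook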

* This is the first release built for Debian Bullseye.


Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.6.tar.gz
* Containers at https://hub.docker.com/r/ceph/ceph/tags?name=v16.2.6
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: ee28fb57e47e9f88813e24bbf4c14496ca299d31
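
For example, to fetch the exact release from git and check it against the
sha1 above:

    git clone --branch v16.2.6 https://github.com/ceph/ceph.git
    cd ceph && git rev-parse HEAD   # should print the release sha1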


------------------------------

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


