v16.2.7 Pacific released

We're happy to announce the 7th backport release in the Pacific series.
We recommend all users upgrade to this release.

Notable Changes
---------------

* A critical bug in the OMAP format upgrade has been fixed. It could
cause data corruption (improperly formatted OMAP keys) after upgrading
a pre-Pacific cluster, if the bluestore-quick-fix-on-mount parameter is
set to true or ceph-bluestore-tool's quick-fix/repair commands are
invoked. Relevant tracker: https://tracker.ceph.com/issues/53062.
bluestore-quick-fix-on-mount continues to default to false. The
commands below show how to check the setting before upgrading.
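
For operators who want to confirm how an upgraded cluster is configured,
a minimal sketch of the checks involved (assuming the underlying config
option is named bluestore_fsck_quick_fix_on_mount; the OSD id and data
path are placeholders for your own deployment):

  # Check the current value of the quick-fix-on-mount option
  # (it continues to default to false)
  ceph config get osd bluestore_fsck_quick_fix_on_mount

  # The affected offline repair commands, run against a stopped OSD;
  # on clusters upgraded from pre-Pacific, only run them on >= 16.2.7
  ceph-bluestore-tool quick-fix --path /var/lib/ceph/osd/ceph-0
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0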

* MGR: The pg_autoscaler will use the 'scale-up' profile as the default
profile again. 16.2.6 changed the default profile to 'scale-down', but
we ran into issues with the device_health_metrics pool consuming too
many PGs, which is not ideal for performance. We will therefore
continue to use the 'scale-up' profile by default until we implement a
limit on the number of PGs that default pools should consume, in
combination with the 'scale-down' profile. The commands below show how
to inspect and switch profiles.
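
A sketch of inspecting and selecting the autoscaler profile (the
autoscale-profile setting is cluster-wide; the status command's output
columns may differ slightly between releases):

  # Show per-pool autoscaler state, including the active profile
  ceph osd pool autoscale-status

  # Explicitly select a profile instead of relying on the default
  ceph osd pool set autoscale-profile scale-up
  ceph osd pool set autoscale-profile scale-down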

* Cephadm & Ceph Dashboard: NFS management has been completely reworked
to ensure that NFS exports are managed consistently across the different
Ceph components. Prior to this, there were 3 incompatible
implementations for configuring the NFS exports: Ceph-Ansible/OpenStack
Manila, Ceph Dashboard and 'mgr/nfs' module. With this release the
'mgr/nfs' way becomes the official interface, and the remaining
components (Cephadm and Ceph Dashboard) adhere to it. While this might
require manually migrating from the deprecated implementations, it will
simplify the user experience for those heavily relying on NFS exports.
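
As a rough illustration of the now-official 'mgr/nfs' interface, an NFS
cluster and a CephFS export can be managed along these lines (a sketch;
the cluster id, filesystem name and pseudo path are placeholders, and
the exact arguments of these commands have varied between minor
releases, so consult the built-in help on your cluster):

  # Create an NFS (ganesha) service managed by the orchestrator
  ceph nfs cluster create mynfs

  # List clusters and exports managed through mgr/nfs
  ceph nfs cluster ls
  ceph nfs export ls mynfs

  # Create a CephFS export (check 'ceph nfs export create cephfs -h'
  # for the exact flags/positional arguments on your release)
  ceph nfs export create cephfs myfs mynfs /cephfs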

* Dashboard: "Cluster Expansion Wizard". After the 'cephadm bootstrap'
step, users that log into the Ceph Dashboard will be presented with a
welcome screen. If they choose to follow the installation wizard, they
will be guided through a set of steps to help them configure their Ceph
cluster: expanding the cluster by adding more hosts, detecting and
defining their storage devices, and finally deploying and configuring
the different Ceph services.
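
For reference, the wizard covers roughly the same ground as the
orchestrator CLI; a minimal sketch of the equivalent manual steps
(hostnames and addresses are placeholders):

  # Add additional hosts to the cluster
  ceph orch host add host2 10.0.0.2
  ceph orch host add host3 10.0.0.3

  # Inspect the storage devices the orchestrator has detected
  ceph orch device ls

  # Deploy OSDs on all available, unused devices
  ceph orch apply osd --all-available-devices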

* OSD: When using mclock_scheduler for QoS, there is no longer a need to
run any manual benchmark. The OSD now automatically sets an appropriate
value for osd_mclock_max_capacity_iops by running a simple benchmark
during initialization.
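
The measured capacity is recorded as an OSD config option, so it can be
inspected and, if the benchmark result looks off for a device,
overridden; a sketch assuming the device-specific option names
osd_mclock_max_capacity_iops_hdd / _ssd and osd.0 as an example OSD:

  # Show any mclock capacity values recorded in the cluster config
  ceph config dump | grep osd_mclock_max_capacity_iops

  # Inspect the value a specific OSD is actually using
  ceph config show osd.0 osd_mclock_max_capacity_iops_ssd

  # Override the automatically measured value if needed
  ceph config set osd.0 osd_mclock_max_capacity_iops_ssd 20000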

* MGR: The global recovery event in the progress module has been
optimized, and a sleep_interval of 5 seconds has been added between
stats collection cycles, to reduce the impact of the progress module on
the MGR, especially in large clusters.
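
If the default interval does not suit a deployment, it can be checked
or tuned via the module option; a sketch, assuming the option is
exposed as mgr/progress/sleep_interval (verify the exact key with
'ceph config help' or the progress module documentation):

  # Show the current stats-collection interval (default 5 seconds)
  ceph config get mgr mgr/progress/sleep_interval

  # Increase the interval further on very large clusters
  ceph config set mgr mgr/progress/sleep_interval 10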


Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-16.2.7.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: dd0603118f56ab514f133c8d2e3adfc983942503
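
For example, the sources can be fetched and the release commit checked
against the sha1 above roughly as follows (a sketch; the v16.2.7 git
tag and container tag are assumed to follow the usual Ceph naming):

  # Clone the release tag and confirm it points at the published sha1
  git clone --depth 1 --branch v16.2.7 https://github.com/ceph/ceph.git
  cd ceph
  git rev-parse 'v16.2.7^{commit}'   # should match the sha1 above

  # Or pull the matching container image
  podman pull quay.io/ceph/ceph:v16.2.7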
