Re: v16.2.7 Pacific released

Hi all,

The release notes are missing an upgrade step that is needed only for
clusters *not* managed by cephadm.
This was noticed in
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/7KSPSUE4VO274H5XQYNFCT7HKWT75BCY/

If you are not using cephadm, you must disable FSMap sanity checks
*before starting the upgrade*:

    ceph config set mon mon_mds_skip_sanity 1

After the upgrade is finished and the cluster is stable, please remove
that setting:

    ceph config rm mon mon_mds_skip_sanity
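
You can confirm the current value at any point (a quick sanity check,
not an extra step of the procedure) with:

    ceph config get mon mon_mds_skip_sanity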

cephadm takes care of this step automatically when it performs the upgrade.

Best Regards,

Dan



On Wed, Dec 8, 2021 at 1:12 AM David Galloway <dgallowa@xxxxxxxxxx> wrote:
>
> We're happy to announce the 7th backport release in the Pacific series.
> We recommend all users upgrade to this release.
>
> Notable Changes
> ---------------
>
> * A critical bug in the OMAP format upgrade is fixed. This bug could
> cause data corruption (improperly formatted OMAP keys) after upgrading a
> pre-Pacific cluster, if the bluestore_fsck_quick_fix_on_mount parameter
> is set to true or ceph-bluestore-tool's quick-fix/repair commands are
> invoked. Relevant tracker: https://tracker.ceph.com/issues/53062.
> bluestore_fsck_quick_fix_on_mount remains false by default.
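>
> For reference, the ceph-bluestore-tool invocations mentioned above look
> like the following (the OSD path is only an example; adjust it for your
> deployment, and run the tool only against a stopped OSD):
>
>     ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-0
>     ceph-bluestore-tool quick-fix --path /var/lib/ceph/osd/ceph-0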
>
> * MGR: The pg_autoscaler will use the 'scale-up' profile as the default
> profile. 16.2.6 changed the default profile to 'scale-down', but we ran
> into issues with the device_health_metrics pool consuming too many PGs,
> which is not ideal for performance. So we will continue to use the
> 'scale-up' profile by default, until we implement a limit on the number
> of PGs that default pools should consume, in combination with the
> 'scale-down' profile.
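>
> To see what the autoscaler is planning for each pool, and to select a
> profile explicitly (the autoscale-profile command here reflects the
> Pacific docs; double-check it against your installed version):
>
>     ceph osd pool autoscale-status
>     ceph osd pool set autoscale-profile scale-down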
>
> * Cephadm & Ceph Dashboard: NFS management has been completely reworked
> to ensure that NFS exports are managed consistently across the different
> Ceph components. Previously, there were three incompatible
> implementations for configuring NFS exports: Ceph-Ansible/OpenStack
> Manila, Ceph Dashboard, and the 'mgr/nfs' module. With this release, the
> 'mgr/nfs' module becomes the official interface, and the remaining
> components (Cephadm and Ceph Dashboard) adhere to it. While this might
> require manually migrating from the deprecated implementations, it will
> simplify the user experience for those who rely heavily on NFS exports.
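>
> A minimal sketch of the 'mgr/nfs' workflow (the cluster, filesystem and
> pseudo-path names here are placeholders, and the exact argument order of
> 'export create' changed between Pacific point releases, so consult
> 'ceph nfs export create cephfs --help' on your version):
>
>     ceph nfs cluster create mynfs
>     ceph nfs export create cephfs myfs mynfs /cephfs
>     ceph nfs export ls mynfs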
>
> * Dashboard: "Cluster Expansion Wizard". After the 'cephadm bootstrap'
> step, users who log in to the Ceph Dashboard will be presented with a
> welcome screen. If they choose to follow the installation wizard, they
> will be guided through a set of steps to help them configure their Ceph
> cluster: expanding the cluster by adding more hosts, detecting and
> defining their storage devices, and finally deploying and configuring
> the different Ceph services.
>
> * OSD: When using mclock_scheduler for QoS, there is no longer a need to
> run any manual benchmark. The OSD now automatically sets an appropriate
> value for osd_mclock_max_capacity_iops by running a simple benchmark
> during initialization.
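>
> To inspect the value an OSD arrived at, query its running config (note
> that in practice the option is split per device type into
> osd_mclock_max_capacity_iops_hdd and osd_mclock_max_capacity_iops_ssd;
> osd.0 is just an example ID):
>
>     ceph config show osd.0 osd_mclock_max_capacity_iops_hdd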
>
> * MGR: The global recovery event in the progress module has been
> optimized and a sleep_interval of 5 seconds has been added between stats
> collection, to reduce the impact of the progress module on the MGR,
> especially in large clusters.
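>
> Progress events show up in the output of 'ceph -s'. Should a stale
> event ever get stuck, the progress module command below should clear
> it (available in Pacific, to the best of our knowledge):
>
>     ceph progress clear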
>
>
> Getting Ceph
> ------------
> * Git at git://github.com/ceph/ceph.git
> * Tarball at https://download.ceph.com/tarballs/ceph-16.2.7.tar.gz
> * Containers at https://quay.io/repository/ceph/ceph
> * For packages, see https://docs.ceph.com/docs/master/install/get-packages/
> * Release git sha1: dd0603118f56ab514f133c8d2e3adfc983942503
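>
> For example, to check out the exact source for this release, or to pull
> the matching container image (assuming the usual vX.Y.Z tagging
> convention on quay.io):
>
>     git clone git://github.com/ceph/ceph.git
>     cd ceph && git checkout dd0603118f56ab514f133c8d2e3adfc983942503
>     podman pull quay.io/ceph/ceph:v16.2.7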
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


