v14.2.22 Nautilus released

We're happy to announce the 22nd, and likely final, backport release in the Nautilus series. Ultimately, we recommend that all users upgrade to newer Ceph releases.

For detailed release notes with links and a changelog, please refer to the official blog entry at https://ceph.io/en/news/blog/2021/v14-2-22-nautilus-released

Notable Changes
---------------

* This release sets `bluefs_buffered_io` to true by default to improve
  performance for metadata-heavy workloads. Enabling this option has been
  reported to occasionally cause excessive kernel swapping under certain
  workloads. Currently, the most consistently performing combination is to
  enable bluefs_buffered_io and disable system-level swap.
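
  For example, on a cluster that hits the swapping issue, one possible
  approach (a sketch only; adapt to your environment, and note that changing
  bluefs_buffered_io may require an OSD restart to take effect) is:

      # confirm the value an OSD is actually using
      ceph config get osd.0 bluefs_buffered_io
      # disable swap on the OSD host (as root); make it persistent by
      # removing swap entries from /etc/fstab
      swapoff -a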

* The default value of `bluestore_cache_trim_max_skip_pinned` has been
  increased to 1000 to control memory growth due to onodes.
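
  If you had previously tuned this option yourself, it may be worth comparing
  your override against the new default after upgrading, e.g. (the osd id is
  only for illustration):

      ceph config get osd.0 bluestore_cache_trim_max_skip_pinned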

* This release also contains several other BlueStore bug fixes, including one
  for an unexpected ENOSPC error in the AVL and Hybrid allocators.

* The trimming logic in the monitor has been made dynamic with the
  introduction of `paxos_service_trim_max_multiplier`, a factor by which
  `paxos_service_trim_max` is multiplied to make trimming faster when
  required. Setting it to 0 disables the upper-bound check on trimming
  and makes the monitors trim at the maximum rate.
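
  As a sketch of how the new option could be used (the values below are for
  illustration only, not a recommendation):

      # let the monitors trim up to 2x paxos_service_trim_max when needed
      ceph config set mon paxos_service_trim_max_multiplier 2
      # or remove the upper bound so the monitors trim at the maximum rate
      ceph config set mon paxos_service_trim_max_multiplier 0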

* A `--max <n>` option is now available with the `osd ok-to-stop` command to
  report up to N OSDs that can be stopped together without making PGs
  unavailable.
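
  For example, to check whether osd.3 together with up to three additional
  OSDs could be stopped at once (the osd id and count are only for
  illustration):

      ceph osd ok-to-stop 3 --max 4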

* OSD: the option `osd_fast_shutdown_notify_mon` has been introduced to allow
  the OSD to notify the monitor that it is shutting down even if
  `osd_fast_shutdown` is enabled. This avoids flooding the monitor logs on
  larger clusters with many 'osd.X reported immediately failed by osd.Y'
  messages, which can confuse tools.
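
  A minimal sketch of enabling this for all OSDs (verify against your own
  configuration management before applying):

      ceph config set osd osd_fast_shutdown_notify_mon true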

* A long-standing bug that prevented 32-bit and 64-bit client/server
  interoperability under msgr v2 has been fixed.  In particular, mixing armv7l
  (armhf) and x86_64 or aarch64 servers in the same cluster now works.

Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-14.2.22.tar.gz
* For packages, see https://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: ca74598065096e6fcbd8433c8779a2be0c889351
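
For source builds, one way to fetch exactly this release (assuming the usual
vX.Y.Z tag naming; the tag should correspond to the sha1 above) is:

    git clone git://github.com/ceph/ceph.git
    cd ceph
    git checkout v14.2.22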