Re: v17.2.7 Quincy released

It would be nice if the dashboard changes, which are substantial, had been covered in the release notes, especially since they are not really backwards compatible. (See my previous messages on this topic.)

On 2023-10-30 10:50, Yuri Weinstein wrote:
We're happy to announce the 7th backport release in the Quincy series.

https://ceph.io/en/news/blog/2023/v17-2-7-quincy-released/

Notable Changes
---------------

* The `ceph mgr dump` command now displays the name of the Manager module that
   registered a RADOS client in the `name` field added to elements of the
   `active_clients` array. Previously, only the address of a module's RADOS
   client was shown in the `active_clients` array.
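
   For example, the module behind each client can now be read straight from
   the dump (a minimal sketch; the `jq` filter is our addition, and only the
   `name` field of `active_clients` is confirmed by the note above):

     # list the Manager modules that registered RADOS clients
     ceph mgr dump | jq '.active_clients[].name'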

* mClock Scheduler: The mClock scheduler (default scheduler in Quincy) has
   undergone significant usability and design improvements to address the slow
   backfill issue. Some important changes are:

   * The 'balanced' profile is set as the default mClock profile because it
     represents a compromise between prioritizing client IO and recovery IO.
     Users can then choose either the 'high_client_ops' profile to prioritize
     client IO or the 'high_recovery_ops' profile to prioritize recovery IO
     (see the example after this list).

   * QoS parameters including reservation and limit are now specified in terms
     of a fraction (range: 0.0 to 1.0) of the OSD's IOPS capacity.

   * The cost parameters (osd_mclock_cost_per_io_usec_* and
     osd_mclock_cost_per_byte_usec_*) have been removed. The cost of an operation
     is now determined using the random IOPS and maximum sequential bandwidth
     capability of the OSD's underlying device.

   * Degraded object recovery is given higher priority when compared to misplaced
     object recovery because degraded objects present a data safety issue not
     present with objects that are merely misplaced. Therefore, backfilling
     operations with the 'balanced' and 'high_client_ops' mClock profiles may
     progress slower than what was seen with the 'WeightedPriorityQueue' (WPQ)
     scheduler.

   * The QoS allocations in all mClock profiles are optimized based on the above
     fixes and enhancements.

   * For more detailed information see:
     https://docs.ceph.com/en/quincy/rados/configuration/mclock-config-ref/
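
   As an example, the profile and QoS settings described above can be changed
   at runtime (a hedged sketch; `osd_mclock_profile` and
   `osd_mclock_scheduler_client_res` are our recollection of the option names
   and should be verified against the documentation linked above):

     # prioritize recovery IO cluster-wide
     ceph config set osd osd_mclock_profile high_recovery_ops

     # or switch to the 'custom' profile and reserve a fraction
     # (0.0 to 1.0) of the OSD's IOPS capacity for client ops
     ceph config set osd osd_mclock_profile custom
     ceph config set osd osd_mclock_scheduler_client_res 0.4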

* RGW: S3 multipart uploads using Server-Side Encryption now replicate
   correctly in multi-site. Previously, the replicas of such objects were
   corrupted on decryption.  A new tool, ``radosgw-admin bucket resync encrypted
   multipart``, can be used to identify these original multipart uploads. The
   ``LastModified`` timestamp of any identified object is incremented by 1
   nanosecond to cause peer zones to replicate it again.  For multi-site
   deployments that make any use of Server-Side Encryption, we recommend
   running this command against every bucket in every zone after all zones have
   upgraded.
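
   A sketch of applying this across a zone (the loop and the
   `radosgw-admin bucket list` call are our additions; only the resync
   subcommand itself comes from the note above):

     # resync every bucket's encrypted multipart uploads in this zone
     for b in $(radosgw-admin bucket list | jq -r '.[]'); do
         radosgw-admin bucket resync encrypted multipart --bucket="$b"
     done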

* CephFS: The MDS now evicts clients that are not advancing their request
   tids, since stale tids cause a large buildup of session metadata, which can
   drive the MDS read-only once the resulting RADOS operation exceeds the size
   threshold. The `mds_session_metadata_threshold` config option controls the
   maximum size to which the (encoded) session metadata can grow.
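
   For example, the threshold can be adjusted if legitimate sessions trip it
   (a minimal sketch; the value below is an arbitrary illustration, not a
   recommendation):

     # cap encoded session metadata; check the option's default with
     # `ceph config help mds_session_metadata_threshold`
     ceph config set mds mds_session_metadata_threshold 268435456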

* CephFS: After a Ceph File System is recovered by following the disaster
   recovery procedure, the recovered files under the `lost+found` directory
   can now be deleted.
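
   A minimal clean-up sketch (the mount point path is an assumption):

     # recovered files can now be removed once they have been reviewed
     rm -rf /mnt/cephfs/lost+found/*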

Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-17.2.7.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: b12291d110049b2f35e32e0de30d70e9a4c060d2
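
To check out exactly this release from Git (a sketch; pinning to the release
sha listed above avoids any assumption about tag names):

  git clone https://github.com/ceph/ceph.git
  cd ceph
  git checkout b12291d110049b2f35e32e0de30d70e9a4c060d2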
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


