Pacific release candidate v16.1.0 is out

There are just a couple remaining issues before the final release.
Please test it out and report any bugs.

The full release notes are in progress here [0].

Notable Changes
---------------

* New ``bluestore_rocksdb_options_annex`` config
  parameter. Complements ``bluestore_rocksdb_options`` and allows
  setting rocksdb options without repeating the existing defaults.
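
  As a sketch, extra RocksDB tuning could be appended like this (the
  specific option value is illustrative only, not a recommendation):

```shell
# Append extra RocksDB options for BlueStore without restating the
# built-in defaults in bluestore_rocksdb_options.
# The value below is an example only.
ceph config set osd bluestore_rocksdb_options_annex \
    "compaction_readahead_size=2097152"
```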

* CephFS adds two new CDentry tags, 'I' --> 'i' and 'L' --> 'l', so
  on-RADOS metadata is no longer backwards compatible after upgrading
  to Pacific or a later release.

* $pid expansion in config paths like ``admin_socket`` will now
  properly expand to the daemon pid for commands like ``ceph-mds`` or
  ``ceph-osd``. Previously only ``ceph-fuse``/``rbd-nbd`` expanded
  ``$pid`` with the actual daemon pid.
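
  For illustration, a per-daemon socket path using ``$pid`` might be
  configured as follows (the path itself is only an example):

```shell
# Example only: make ceph-osd admin sockets include the daemon's pid.
# $pid now expands to the real pid for ceph-osd and ceph-mds too,
# not just ceph-fuse/rbd-nbd. Single quotes keep the shell from
# expanding the metavariables.
ceph config set osd admin_socket '/var/run/ceph/$cluster-$name.$pid.asok'
```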

* The allowable options for some ``radosgw-admin`` commands have been
  changed.

  * ``mdlog-list``, ``datalog-list``, and ``sync-error-list`` no
    longer accept start and end dates, but do accept a single optional
    start marker.

  * ``mdlog-trim``, ``datalog-trim``, and ``sync-error-trim`` accept
    only a single marker giving the end of the trimmed range.

  * Similarly, the date ranges and marker ranges have been removed
    from the RESTful DATALog and MDLog list and trim operations.
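
  A rough sketch of the new invocation style (the marker value is
  hypothetical, and the exact flag spelling is an assumption; check
  ``radosgw-admin --help`` on your build):

```shell
# List metadata log entries starting from an optional marker instead
# of a date range. The marker value below is hypothetical.
radosgw-admin mdlog list --marker "1_1617235200.000000_123.1"

# Trim everything up to a single end marker (no date range).
radosgw-admin mdlog trim --marker "1_1617235200.000000_123.1"
```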

* ceph-volume: The ``lvm batch`` subcommand received a major
  rewrite. This closes a number of bugs and improves usability in
  terms of size specification and calculation, as well as idempotency
  behaviour and the disk replacement process.  Please refer to
  https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for more
  detailed information.
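
  A typical workflow with the rewritten subcommand (device names are
  examples only):

```shell
# Preview the computed layout without touching the disks.
ceph-volume lvm batch --report /dev/sdb /dev/sdc

# Apply the same layout; rerunning the identical command is intended
# to be idempotent and will skip devices that are already prepared.
ceph-volume lvm batch /dev/sdb /dev/sdc
```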

* Configuration variables for permitted scrub times have changed.  The
  legal values for ``osd_scrub_begin_hour`` and ``osd_scrub_end_hour``
  are 0 - 23; the use of 24 is now illegal.  Specifying ``0`` for both
  values allows scrubbing during every hour.  The legal values for
  ``osd_scrub_begin_week_day`` and ``osd_scrub_end_week_day`` are 0 -
  6; the use of 7 is now illegal.  Specifying ``0`` for both values
  allows scrubbing on every day of the week.
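
  For example, to confine scrubbing to a nightly window (the hours
  chosen here are illustrative):

```shell
# Restrict scrubbing to 22:00-06:00; hours must now be in 0-23.
ceph config set osd osd_scrub_begin_hour 22
ceph config set osd osd_scrub_end_hour 6

# 0 for both week-day values (legal range 0-6) allows every day.
ceph config set osd osd_scrub_begin_week_day 0
ceph config set osd osd_scrub_end_week_day 0
```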

* Support for multiple file systems in a single Ceph cluster is now
  stable. New Ceph clusters enable multiple file systems by
  default. Existing clusters must still set the "enable_multiple" flag
  on the fs. Please see the CephFS documentation for more information.
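
  On an existing (pre-Pacific) cluster this might look as follows (the
  file system and pool names are examples; older releases may also ask
  for an extra confirmation flag):

```shell
# Existing clusters must still opt in to multiple file systems.
ceph fs flag set enable_multiple true

# A second file system can then be created; pool names are examples
# and the pools must already exist.
ceph fs new cephfs2 cephfs2_metadata cephfs2_data
```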

* volume/nfs: The "ganesha-" prefix was recently removed from the
  cluster id and the nfs-ganesha common config object to ensure a
  consistent namespace across different orchestrator backends. Please
  delete any existing nfs-ganesha clusters prior to upgrading and
  redeploy new clusters after upgrading to Pacific.

* A new health check, DAEMON_OLD_VERSION, will warn if different
  versions of Ceph are running on daemons. It will generate a health
  error if multiple versions are detected.  The condition must persist
  for longer than mon_warn_older_version_delay (set to 1 week by
  default) before the health condition is triggered; this allows most
  upgrades to proceed without falsely raising the warning.  If an
  upgrade is paused for an extended period, the warning can be muted
  with "ceph health mute DAEMON_OLD_VERSION --sticky".  After the
  upgrade has finished, run "ceph health unmute DAEMON_OLD_VERSION".

* MGR: the progress module can now be turned on/off using the
  commands ``ceph progress on`` and ``ceph progress off``.

* An AWS-compliant API, "GetTopicAttributes", was added to replace
  the existing "GetTopic" API. The new API should be used to fetch
  information about topics used for bucket notifications.
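
  The progress module toggle, as quoted above:

```shell
# Disable progress-event reporting (e.g. during benchmarking) ...
ceph progress off

# ... and re-enable it afterwards.
ceph progress on
```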

* librbd: The shared, read-only parent cache's config option
  ``immutable_object_cache_watermark`` has been updated to properly
  reflect the upper cache utilization before space is reclaimed. The
  default ``immutable_object_cache_watermark`` is now ``0.9``. If the
  capacity reaches 90%, the daemon will start evicting cold cache
  entries.
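
  A sketch of lowering the watermark so reclamation starts earlier
  (the ``client`` config section is an assumption here; the value is
  illustrative):

```shell
# Assumption: the immutable object cache daemon reads the client
# config section. Start evicting cold entries at 80% utilization
# instead of the new 0.9 default.
ceph config set client immutable_object_cache_watermark 0.8
```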

* OSD: the option ``osd_fast_shutdown_notify_mon`` has been introduced
  to allow the OSD to notify the monitor that it is shutting down even
  if ``osd_fast_shutdown`` is enabled. This helps with monitor logs on
  larger clusters, which may otherwise receive many 'osd.X reported
  immediately failed by osd.Y' messages that confuse tools.
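
  Enabling the new option alongside fast shutdown might look like:

```shell
# Keep fast shutdown, but have OSDs tell the monitor they are going
# down, reducing spurious "immediately failed" reports in mon logs.
ceph config set osd osd_fast_shutdown_notify_mon true
```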

[0] https://github.com/ceph/ceph/pull/40265
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


