CephFS snapshot is now stable and enabled by default on new filesystems :)
De: "ceph" <ceph@xxxxxxxxxxxxxx>
À: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Envoyé: Vendredi 1 Juin 2018 14:48:13
Objet: Fwd: v13.2.0 Mimic is out
À: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Envoyé: Vendredi 1 Juin 2018 14:48:13
Objet: Fwd: v13.2.0 Mimic is out
FYI
De: "Abhishek" <abhishek@xxxxxxxx> À: "ceph-devel"
<ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxx>,
ceph-maintainers@xxxxxxxx, ceph-announce@xxxxxxxx Envoyé: Vendredi 1
Juin 2018 14:11:00 Objet: v13.2.0 Mimic is out
We're glad to announce the first stable release of Mimic, the next long
term release series. There have been major changes since Luminous, so
please read the upgrade notes carefully.
We'd also like to highlight that Mimic received contributions from over
282 contributors, and we thank everyone for their continued support. The
next major release of Ceph will be called Nautilus.
For the detailed changelog, please refer to the release blog at
https://ceph.com/releases/v13-2-0-mimic-released/
Major Changes from Luminous
---------------------------
- *Dashboard*:
* The (read-only) Ceph manager dashboard introduced in Ceph Luminous has
been replaced with a new implementation inspired by and derived from the
openATTIC[1] Ceph management tool, providing a drop-in replacement
offering a number of additional management features.
- *RADOS*:
* Config options can now be centrally stored and managed by the monitor
(see the configuration example after this list).
* The monitor daemon uses significantly less disk space when undergoing
recovery or rebalancing operations.
* An *async recovery* feature reduces the tail latency of requests when
the OSDs are recovering from a recent failure.
* OSD preemption of scrub by conflicting requests reduces tail latency.
- *RGW*:
* RGW can now replicate a zone (or a subset of buckets) to an external
cloud storage service like S3.
* RGW now supports the S3 multi-factor authentication API on versioned
buckets.
* The Beast frontend is no longer experimental and is considered stable
and ready for use.
- *CephFS*:
* Snapshots are now stable when combined with multiple MDS daemons.
- *RBD*:
* Image clones no longer require explicit *protect* and *unprotect*
steps.
* Images can be deep-copied (including any clone linkage to a parent
image and associated snapshots) to new pools or with altered data
layouts.
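As a quick illustration of the centralized configuration mentioned under
RADOS above, here is a minimal sketch; the option name and value are
examples only, and the commands are those documented for Mimic::

# ceph config set osd osd_max_backfills 2
# ceph config dump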
Upgrading from Luminous
-----------------------
Notes
~~~~~
* We recommend you avoid creating any RADOS pools while the upgrade is
in process.
* You can monitor the progress of your upgrade at each stage with the
`ceph versions` command, which will tell you what ceph version(s) are
running for each type of daemon.
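For reference, a (hypothetical) mid-upgrade `ceph versions` output could
look like the following, with daemon counts grouped by type and version::

# ceph versions
{
    "mon": { "ceph version 13.2.0 (...) mimic (stable)": 3 },
    "mgr": { "ceph version 13.2.0 (...) mimic (stable)": 3 },
    "osd": {
        "ceph version 12.2.5 (...) luminous (stable)": 12,
        "ceph version 13.2.0 (...) mimic (stable)": 22
    },
    "mds": { "ceph version 12.2.5 (...) luminous (stable)": 2 },
    "overall": {
        "ceph version 12.2.5 (...) luminous (stable)": 14,
        "ceph version 13.2.0 (...) mimic (stable)": 28
    }
}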
Instructions
~~~~~~~~~~~~
#. Make sure your cluster is stable and healthy (no down or recovering
OSDs). (Optional, but recommended.)
#. Set the `noout` flag for the duration of the upgrade. (Optional, but
recommended.)::
# ceph osd set noout
#. Upgrade monitors by installing the new packages and restarting the
monitor daemons::
# systemctl restart ceph-mon.target
Verify the monitor upgrade is complete once all monitors are up by
looking for the `mimic` feature string in the mon map. For example::
# ceph mon feature ls
should include `mimic` under persistent features::
on current monmap (epoch NNN)
   persistent: [kraken,luminous,mimic]
   required: [kraken,luminous,mimic]
#. Upgrade `ceph-mgr` daemons by installing the new packages and
restarting with::
# systemctl restart ceph-mgr.target
Verify the ceph-mgr daemons are running by checking `ceph -s`::
# ceph -s
...
services:
  mon: 3 daemons, quorum foo,bar,baz
  mgr: foo(active), standbys: bar, baz
...
#. Upgrade all OSDs by installing the new packages and restarting the
ceph-osd daemons on all hosts::
# systemctl restart ceph-osd.target
You can monitor the progress of the OSD upgrades with the new `ceph
versions` or `ceph osd versions` command::
# ceph osd versions
{
    "ceph version 12.2.5 (...) luminous (stable)": 12,
    "ceph version 13.2.0 (...) mimic (stable)": 22,
}
#. Upgrade all CephFS MDS daemons. For each CephFS file system:
#. Reduce the number of ranks to 1. (Make note of the original number of
MDS daemons first if you plan to restore it later.)::
# ceph status
# ceph fs set <fs_name> max_mds 1
#. Wait for the cluster to deactivate any non-zero ranks by periodically
checking the status::
# ceph status
#. Take all standby MDS daemons offline on the appropriate hosts with::
# systemctl stop ceph-mds@<daemon_name>
#. Confirm that only one MDS is online and is rank 0 for your FS::
# ceph status
#. Upgrade the last remaining MDS daemon by installing the new packages
and restarting the daemon::
# systemctl restart ceph-mds.target
#. Restart all standby MDS daemons that were taken offline::
# systemctl start ceph-mds.target
#. Restore the original value of `max_mds` for the volume::
# ceph fs set <fs_name> max_mds <original_max_mds>
#. Upgrade all radosgw daemons by upgrading packages and restarting
daemons on all hosts::
# systemctl restart radosgw.target
#. Complete the upgrade by disallowing pre-mimic OSDs and enabling all
new Mimic-only functionality::
# ceph osd require-osd-release mimic
#. If you set `noout` at the beginning, be sure to clear it with::
# ceph osd unset noout
#. Verify the cluster is healthy with `ceph health`.
Upgrading from pre-Luminous releases (like Jewel)
-------------------------------------------------
You *must* first upgrade to Luminous (12.2.z) before attempting an
upgrade to Mimic.
Upgrade compatibility notes
---------------------------
These changes occurred between the Luminous and Mimic releases.
* *core*:
- The `pg force-recovery` command will not work for erasure-coded PGs
when a Luminous monitor is running along with a Mimic OSD. Please use
the recommended upgrade order of monitors before OSDs to avoid this issue.
- The sample `crush-location-hook` script has been removed. Its output
is equivalent to the built-in default behavior, so it has been replaced
with an example in the CRUSH documentation.
- The `-f` option of the rados tool now means `--format` instead of
`--force`, for consistency with the ceph tool.
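For example, with a Mimic `rados` binary the following now selects JSON
output rather than forcing anything (a hedged illustration; scripts that
relied on `-f` meaning `--force` should spell out the long option)::

# rados df -f json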
- The format of the `config diff` output via the admin socket has
changed. It now reflects the source of each config option (e.g.,
default, config file, command line) as well as the final (active) value.
- Commands variously marked as `del`, `delete`, `remove` etc. should now
all be normalized as `rm`. Commands already supporting alternatives to
`rm` remain backward-compatible. This changeset applies to the
`radosgw-admin` tool as well.
- Monitors will now prune on-disk full maps if the number of maps grows
above a certain number (mon_osdmap_full_prune_min, default: 10000), thus
preventing unbounded growth of the monitor data store. This feature is
enabled by default, and can be disabled by setting
`mon_osdmap_full_prune_enabled` to false.
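If you do want to disable it, a sketch using the new centralized
configuration could look like::

# ceph config set mon mon_osdmap_full_prune_enabled false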
- *rados list-inconsistent-obj format changes:*
+ Various error strings have been improved. For example, the "oi" or
"oi_attr" prefix in error names, which stands for object info, is now
"info" (e.g. oi_attr_missing is now info_missing).
+ The object's "selected_object_info" is now in json format instead of a
string.
+ The attribute errors (attr_value_mismatch, attr_name_mismatch) only
apply to user attributes. Only user attributes are output and have the
internal leading underscore stripped.
+ If there are hash information errors (hinfo_missing, hinfo_corrupted,
hinfo_inconsistency) then "hashinfo" is added with the json format of
the information. If the information is corrupt then "hashinfo" is a
string containing the value.
+ If there are snapset errors (snapset_missing, snapset_corrupted,
snapset_inconsistency) then "snapset" is added with the json format of
the information. If the information is corrupt then "snapset" is a
string containing the value.
+ If there are object information errors (info_missing, info_corrupted,
obj_size_info_mismatch, object_info_inconsistency) then "object_info" is
added with the json format of the information instead of a string. If
the information is corrupt then "object_info" is a string containing the
value.
- *rados list-inconsistent-snapset format changes:*
+ Various error strings have been improved. For example, the "ss_attr"
prefix in error names, which stands for snapset info, is now "snapset"
(e.g. ss_attr_missing is now snapset_missing). The error snapset_mismatch
has been renamed to snapset_error to better reflect what it means.
+ The head snapset information is output in json format as "snapset."
This means that even when there are no head errors, the head object will
be output when any shard has an error. This head object is there to show
the snapset that was used in determining errors.
- The `osd_mon_report_interval_min` option has been renamed to
`osd_mon_report_interval`, and the `osd_mon_report_interval_max`
(unused) has been eliminated. If this value has been customized on your
cluster then your configuration should be adjusted in order to avoid
reverting to the default value.
- The config-key interface can store arbitrary binary blobs but JSON can
only express printable strings. If binary blobs are present, the 'ceph
config-key dump' command will show them as something like `<<< binary
blob of length N >>>`.
- Bootstrap auth keys will now be generated automatically on a fresh
deployment; these keys will also be generated, if missing, during upgrade.
- The `osd force-create-pg` command now requires a force option to
proceed because the command is dangerous: it declares that data loss is
permanent and instructs the cluster to proceed with an empty PG in its
place, without making any further efforts to find the missing data.
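A hedged sketch of the new invocation, assuming `--yes-i-really-mean-it`
is the force option and `2.5` is a placeholder PG id::

# ceph osd force-create-pg 2.5 --yes-i-really-mean-it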
* *CephFS*:
- Upgrading an MDS cluster to 12.2.3+ will result in all active MDS
exiting due to feature incompatibilities once an upgraded MDS comes
online (even as standby). Operators may ignore the error messages and
continue upgrading/restarting or follow this upgrade sequence:
After upgrading the monitors to Mimic, reduce the number of ranks to 1
(`ceph fs set <fs_name> max_mds 1`), wait for all other MDS to
deactivate, leaving the one active MDS, stop all standbys, upgrade the
single active MDS, then upgrade/start standbys. Finally, restore the
previous max_mds.
!! NOTE: see release notes on snapshots in CephFS if you have ever
enabled snapshots on your file system.
See also: https://tracker.ceph.com/issues/23172
- Several `ceph mds ...` commands have been obsoleted and replaced by
equivalent `ceph fs ...` commands:
+ `mds dump` -> `fs dump`
+ `mds getmap` -> `fs dump`
+ `mds stop` -> `mds deactivate`
+ `mds set_max_mds` -> `fs set max_mds`
+ `mds set` -> `fs set`
+ `mds cluster_down` -> `fs set cluster_down true`
+ `mds cluster_up` -> `fs set cluster_down false`
+ `mds add_data_pool` -> `fs add_data_pool`
+ `mds remove_data_pool` -> `fs rm_data_pool`
+ `mds rm_data_pool` -> `fs rm_data_pool`
- New CephFS file system attributes session_timeout and
session_autoclose are configurable via `ceph fs set`. The MDS config
options `mds_session_timeout`, `mds_session_autoclose`, and
`mds_max_file_size` are now obsolete.
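For example (the values are illustrative only; both are expressed in
seconds)::

# ceph fs set <fs_name> session_timeout 60
# ceph fs set <fs_name> session_autoclose 300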
- As the multiple MDS feature is now standard, it is now enabled by
default. `ceph fs set allow_multimds` is now deprecated and will be
removed in a future release.
- As the directory fragmentation feature is now standard, it is now
enabled by default. `ceph fs set allow_dirfrags` is now deprecated and
will be removed in a future release.
- MDS daemons now activate and deactivate based on the value of
`max_mds`. Accordingly, `ceph mds deactivate` has been deprecated as it
is now redundant.
- Taking a CephFS cluster down is now done by setting the down flag
which deactivates all MDS. For example: `ceph fs set cephfs down true`.
- Preventing standbys from joining as new actives (formerly the now
deprecated cluster_down flag) on a file system is now accomplished by
setting the joinable flag. This is useful mostly for testing so that a
file system may be quickly brought down and deleted.
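A brief sketch, assuming the flag is toggled with `ceph fs set`::

# ceph fs set <fs_name> joinable false
# ceph fs set <fs_name> joinable true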
- Each MDS rank now maintains a table that tracks open files and their
ancestor directories. A recovering MDS can quickly get open files' paths,
significantly reducing the time needed to load inodes for open files. The
MDS creates the table automatically if it does not exist.
- CephFS snapshots are now stable and enabled by default on new
filesystems. To enable snapshots on existing filesystems, use the command::
ceph fs set <fs_name> allow_new_snaps true
The on-disk format of snapshot metadata has changed. Metadata in the old
format cannot be handled properly in a multiple-active-MDS configuration.
To guarantee that all snapshot metadata on existing filesystems gets
updated, follow the MDS cluster upgrade sequence strictly.
See http://docs.ceph.com/docs/mimic/cephfs/upgrading/
For filesystems that have ever had snapshots enabled, the multiple-active
MDS feature is disabled by the Mimic monitor daemon. This will cause the
"restore previous max_mds" step in the procedure above to fail. To
re-enable the feature, either delete all old snapshots or scrub the whole
filesystem:
- `ceph daemon <mds of rank 0> scrub_path /`
- `ceph daemon <mds of rank 0> scrub_path '~mdsdir'`
- Support has been added in Mimic for quotas in the Linux kernel client
as of v4.17.
See http://docs.ceph.com/docs/mimic/cephfs/quota/
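Quotas are set as extended attributes on a directory; a minimal sketch,
assuming a kernel-mounted filesystem at /mnt/cephfs (the path and limit
are examples only)::

# setfattr -n ceph.quota.max_bytes -v 100000000 /mnt/cephfs/some/dir
# getfattr -n ceph.quota.max_bytes /mnt/cephfs/some/dir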
- Many fixes have been made to the MDS metadata balancer, which
distributes load across MDS daemons. The automatic balancing is expected
to work well for most use cases. In Luminous, subtree pinning was advised
as a manual workaround for poor balancer behavior. This may no longer be
necessary, so it is recommended to try experimentally disabling pinning
(as a form of load balancing) to see if the built-in balancer works
adequately for you. Please report any poor behavior post-upgrade.
- NFS-Ganesha is an NFS userspace server that can export shares from
multiple file systems, including CephFS. Support for this CephFS client
has improved significantly in Mimic. In particular, delegations are now
supported through the libcephfs library so that Ganesha may issue
delegations to its NFS clients allowing for safe write buffering and
coherent read caching. Documentation is also now available:
http://docs.ceph.com/docs/mimic/cephfs/nfs/
- MDS uptime is now available in the output of the MDS admin socket
`status` command.
- MDS performance counters for client requests now include average
latency as well as the count.
* *RBD*
- The RBD C API's `rbd_discard` method now enforces a maximum length of
2GB to match the C++ API's `Image::discard` method. This restriction
prevents overflow of the result code.
- The rbd CLI's `lock list` JSON and XML output has changed.
- The rbd CLI's `showmapped` JSON and XML output has changed.
- RBD now optionally supports simplified image clone semantics where
non-protected snapshots can be cloned; and snapshots with linked clones
can be removed and the space automatically reclaimed once all remaining
linked clones are detached. This feature is enabled by default if the
OSD "require-min-compat-client" flag is set to mimic or later; or can be
overridden via the "rbd_default_clone_format" configuration option.
- RBD now supports deep copy of images that preserves snapshot history.
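A hedged sketch of both features; the pool and image names are only
examples::

# ceph osd set-require-min-compat-client mimic
# rbd snap create rbd/parent@snap
# rbd clone rbd/parent@snap rbd/child
# rbd deep cp rbd/parent rbd/parent-copy

Note that the clone step above no longer needs an explicit
`rbd snap protect` once the clone v2 format is in effect.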
* *RGW*
- The RGW Beast frontend is now declared stable and ready for production
use. See :ref:`rgw_frontends` for details.
- The Civetweb frontend has been updated to the latest 1.10 release.
- The S3 API now has support for multi-factor authentication. Refer to
:ref:`rgw_mfa` for details.
- RGW now has a sync plugin to sync to AWS and clouds with S3-like APIs.
* *MGR*
- The (read-only) Ceph manager dashboard introduced in Ceph Luminous has
been replaced with a new implementation, providing a drop-in replacement
offering a number of additional management features. To access the new
dashboard, you first need to define a username and password and create
an SSL certificate. See the :ref:`dashboard documentation
<mgr-dashboard-overview>` for a feature overview and installation
instructions.
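A minimal setup sketch, assuming the Mimic command names for the
dashboard module (check the linked documentation for your exact version)::

# ceph mgr module enable dashboard
# ceph dashboard create-self-signed-cert
# ceph dashboard set-login-credentials <username> <password>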
- The `ceph-rest-api` command-line tool (obsoleted by the MGR `restful`
module and deprecated since v12.2.5) has been dropped.
There is an MGR module called `restful` which provides similar
functionality via a "pass through" method. See
http://docs.ceph.com/docs/master/mgr/restful for details.
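To use the module in place of the dropped tool, something along these
lines should work (hedged; see the module documentation)::

# ceph mgr module enable restful
# ceph restful create-key <key_name>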
- A new command tracks throughput and IOPS statistics, which are also
available in `ceph -s` (and previously in `ceph -w`). To use this
command, enable the `iostat` Manager module and invoke it using `ceph
iostat`. See the :ref:`iostat documentation <mgr-iostat-overview>` for
details.
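For example::

# ceph mgr module enable iostat
# ceph iostat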
* *build/packaging*
- The `rcceph` script (`systemd/ceph` in the source code tree, shipped
as `/usr/sbin/rcceph` in the ceph-base package for CentOS and SUSE) has
been dropped. This script was used to perform admin operations (start,
stop, restart, etc.) on all OSD and/or MON daemons running on a given
machine. This functionality is provided by the systemd target units
(`ceph-osd.target`, `ceph-mon.target`, etc.).
- The python-ceph-compat package is declared deprecated, and will be
dropped when all supported distros have completed the move to Python 3.
It has already been dropped from those supported distros where Python 3
is standard and Python 2 is optional (currently only SUSE).
- The Ceph codebase has now moved to the C++17 standard.
- The Ceph LZ4 compression plugin is now enabled by default, and
introduces a new build dependency.
[1]: https://openattic.org
Getting Ceph
------------
* Git at git://github.com/ceph/ceph.git
* Tarball at http://download.ceph.com/tarballs/ceph-13.2.0.tar.gz
* For packages, see http://docs.ceph.com/docs/master/install/get-packages/
* Release git sha1: f38fff5d093da678f6736c7a008511873c8d0fda
De: "Abhishek" <abhishek@xxxxxxxx> À: "ceph-devel"
<ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxx>,
ceph-maintainers@xxxxxxxx, ceph-announce@xxxxxxxx Envoyé: Vendredi 1
Juin 2018 14:11:00 Objet: v13.2.0 Mimic is out
We're glad to announce the first stable release of Mimic, the next long
term release series. There have been major changes since Luminous and
please read the upgrade notes carefully.
We'd also like to highlight that we've had contributions from over 282
contributors, for Mimic, and would like to thank everyone for the
continued support. The next major release of Ceph will be called Nautilus.
For the detailed changelog, please refer to the release blog at
https://ceph.com/releases/v13-2-0-mimic-released/
Major Changes from Luminous ---------------------------
- *Dashboard*:
* The (read-only) Ceph manager dashboard introduced in Ceph Luminous has
been replaced with a new implementation inspired by and derived from the
openATTIC[1] Ceph management tool, providing a drop-in replacement
offering a number of additional management features
- *RADOS*:
* Config options can now be centrally stored and managed by the monitor.
* The monitor daemon uses significantly less disk space when undergoing
recovery or rebalancing operations. * An *async recovery* feature
reduces the tail latency of requests when the OSDs are recovering from a
recent failure. * OSD preemption of scrub by conflicting requests
reduces tail latency.
- *RGW*:
* RGW can now replicate a zone (or a subset of buckets) to an external
cloud storage service like S3. * RGW now supports the S3 multi-factor
authentication api on versioned buckets. * The Beast frontend is no long
expermiental and is considered stable and ready for use.
- *CephFS*:
* Snapshots are now stable when combined with multiple MDS daemons.
- *RBD*:
* Image clones no longer require explicit *protect* and *unprotect*
steps. * Images can be deep-copied (including any clone linkage to a
parent image and associated snapshots) to new pools or with altered data
layouts.
Upgrading from Luminous -----------------------
Notes ~~~~~
* We recommend you avoid creating any RADOS pools while the upgrade is
in process.
* You can monitor the progress of your upgrade at each stage with the
`ceph versions` command, which will tell you what ceph version(s) are
running for each type of daemon.
Instructions ~~~~~~~~~~~~
#. Make sure your cluster is stable and healthy (no down or recoverying
OSDs). (Optional, but recommended.)
#. Set the `noout` flag for the duration of the upgrade. (Optional, but
recommended.)::
# ceph osd set noout
#. Upgrade monitors by installing the new packages and restarting the
monitor daemons.::
# systemctl restart ceph-mon.target
Verify the monitor upgrade is complete once all monitors are up by
looking for the `mimic` feature string in the mon map. For example::
# ceph mon feature ls
should include `mimic` under persistent features::
on current monmap (epoch NNN) persistent: [kraken,luminous,mimic]
required: [kraken,luminous,mimic]
#. Upgrade `ceph-mgr` daemons by installing the new packages and
restarting with::
# systemctl restart ceph-mgr.target
Verify the ceph-mgr daemons are running by checking `ceph -s`::
# ceph -s
... services: mon: 3 daemons, quorum foo,bar,baz mgr: foo(active),
standbys: bar, baz ...
#. Upgrade all OSDs by installing the new packages and restarting the
ceph-osd daemons on all hosts::
# systemctl restart ceph-osd.target
You can monitor the progress of the OSD upgrades with the new `ceph
versions` or `ceph osd versions` command::
# ceph osd versions { "ceph version 12.2.5 (...) luminous (stable)": 12,
"ceph version 13.2.0 (...) mimic (stable)": 22, }
#. Upgrade all CephFS MDS daemons. For each CephFS file system,
#. Reduce the number of ranks to 1. (Make note of the original number of
MDS daemons first if you plan to restore it later.)::
# ceph status # ceph fs set <fs_name> max_mds 1
#. Wait for the cluster to deactivate any non-zero ranks by periodically
checking the status::
# ceph status
#. Take all standby MDS daemons offline on the appropriate hosts with::
# systemctl stop ceph-mds@<daemon_name>
#. Confirm that only one MDS is online and is rank 0 for your FS::
# ceph status
#. Upgrade the last remaining MDS daemon by installing the new packages
and restarting the daemon::
# systemctl restart ceph-mds.target
#. Restart all standby MDS daemons that were taken offline::
# systemctl start ceph-mds.target
#. Restore the original value of `max_mds` for the volume::
# ceph fs set <fs_name> max_mds <original_max_mds>
#. Upgrade all radosgw daemons by upgrading packages and restarting
daemons on all hosts::
# systemctl restart radosgw.target
#. Complete the upgrade by disallowing pre-mimic OSDs and enabling all
new Mimic-only functionality::
# ceph osd require-osd-release mimic
#. If you set `noout` at the beginning, be sure to clear it with::
# ceph osd unset noout
#. Verify the cluster is healthy with `ceph health`.
Upgrading from pre-Luminous releases (like Jewel)
-------------------------------------------------
You *must* first upgrade to Luminous (12.2.z) before attempting an
upgrade to Mimic.
Upgrade compatibility notes ---------------------------
These changes occurred between the Luminous and Mimic releases.
* *core*:
- The `pg force-recovery` command will not work for erasure-coded PGs
when a Luminous monitor is running along with a Mimic OSD. Please use
the recommended upgrade order of monitors before OSDs to avoid this issue.
- The sample `crush-location-hook` script has been removed. Its output
is equivalent to the built-in default behavior, so it has been replaced
with an example in the CRUSH documentation.
- The `-f` option of the rados tool now means `--format` instead of
`--force`, for consistency with the ceph tool.
- The format of the `config diff` output via the admin socket has
changed. It now reflects the source of each config option (e.g.,
default, config file, command line) as well as the final (active) value.
- Commands variously marked as `del`, `delete`, `remove` etc. should now
all be normalized as `rm`. Commands already supporting alternatives to
`rm` remain backward-compatible. This changeset applies to the
`radosgw-admin` tool as well.
- Monitors will now prune on-disk full maps if the number of maps grows
above a certain number (mon_osdmap_full_prune_min, default: 10000), thus
preventing unbounded growth of the monitor data store. This feature is
enabled by default, and can be disabled by setting
`mon_osdmap_full_prune_enabled` to false.
- *rados list-inconsistent-obj format changes:*
+ Various error strings have been improved. For example, the "oi" or
"oi_attr" in errors which stands for object info is now "info" (e.g.
oi_attr_missing is now info_missing).
+ The object's "selected_object_info" is now in json format instead of
string.
+ The attribute errors (attr_value_mismatch, attr_name_mismatch) only
apply to user attributes. Only user attributes are output and have the
internal leading underscore stripped.
+ If there are hash information errors (hinfo_missing, hinfo_corrupted,
hinfo_inconsistency) then "hashinfo" is added with the json format of
the information. If the information is corrupt then "hashinfo" is a
string containing the value.
+ If there are snapset errors (snapset_missing, snapset_corrupted,
snapset_inconsistency) then "snapset" is added with the json format of
the information. If the information is corrupt then "snapset" is a
string containing the value.
+ If there are object information errors (info_missing, info_corrupted,
obj_size_info_mismatch, object_info_inconsistency) then "object_info" is
added with the json format of the information instead of a string. If
the information is corrupt then "object_info" is a string containing the
value.
- *rados list-inconsistent-snapset format changes:*
+ Various error strings have been improved. For example, the "ss_attr"
in errors which stands for snapset info is now "snapset" (e.g.
ss_attr_missing is now snapset_missing). The error snapset_mismatch has
been renamed to snapset_error to better reflect what it means.
+ The head snapset information is output in json format as "snapset."
This means that even when there are no head errors, the head object will
be output when any shard has an error. This head object is there to show
the snapset that was used in determining errors.
- The `osd_mon_report_interval_min` option has been renamed to
`osd_mon_report_interval`, and the `osd_mon_report_interval_max`
(unused) has been eliminated. If this value has been customized on your
cluster then your configuration should be adjusted in order to avoid
reverting to the default value.
- The config-key interface can store arbitrary binary blobs but JSON can
only express printable strings. If binary blobs are present, the 'ceph
config-key dump' command will show them as something like `<<< binary
blob of length N >>>`.
- Bootstrap auth keys will now be generated automatically on a fresh
deployment; these keys will also be generated, if missing, during upgrade.
- The `osd force-create-pg` command now requires a force option to
proceed because the command is dangerous: it declares that data loss is
permanent and instructs the cluster to proceed with an empty PG in its
place, without making any further efforts to find the missing data.
*CephFS*:
- Upgrading an MDS cluster to 12.2.3+ will result in all active MDS
exiting due to feature incompatibilities once an upgraded MDS comes
online (even as standby). Operators may ignore the error messages and
continue upgrading/restarting or follow this upgrade sequence:
After upgrading the monitors to Mimic, reduce the number of ranks to 1
(`ceph fs set <fs_name> max_mds 1`), wait for all other MDS to
deactivate, leaving the one active MDS, stop all standbys, upgrade the
single active MDS, then upgrade/start standbys. Finally, restore the
previous max_mds.
!! NOTE: see release notes on snapshots in CephFS if you have ever
enabled snapshots on your file system.
See also: https://tracker.ceph.com/issues/23172
- Several `ceph mds ...` commands have been obsoleted and replaced by
equivalent `ceph fs ...` commands:
+ `mds dump` -> `fs dump` + `mds getmap` -> `fs dump` + `mds stop` ->
`mds deactivate` + `mds set_max_mds` -> `fs set max_mds` + `mds set` ->
`fs set` + `mds cluster_down` -> `fs set cluster_down true` + `mds
cluster_up` -> `fs set cluster_down false` + `mds add_data_pool` -> `fs
add_data_pool` + `mds remove_data_pool` -> `fs rm_data_pool` + `mds
rm_data_pool` -> `fs rm_data_pool`
- New CephFS file system attributes session_timeout and
session_autoclose are configurable via `ceph fs set`. The MDS config
options `mds_session_timeout`, `mds_session_autoclose`, and
`mds_max_file_size` are now obsolete.
- As the multiple MDS feature is now standard, it is now enabled by
default. `ceph fs set allow_multimds` is now deprecated and will be
removed in a future release.
- As the directory fragmentation feature is now standard, it is now
enabled by default. `ceph fs set allow_dirfrags` is now deprecated and
will be removed in a future release.
- MDS daemons now activate and deactivate based on the value of
`max_mds`. Accordingly, `ceph mds deactivate` has been deprecated as it
is now redundant.
- Taking a CephFS cluster down is now done by setting the down flag
which deactivates all MDS. For example: `ceph fs set cephfs down true`.
- Preventing standbys from joining as new actives (formerly the now
deprecated cluster_down flag) on a file system is now accomplished by
setting the joinable flag. This is useful mostly for testing so that a
file system may be quickly brought down and deleted.
- New CephFS file system attributes session_timeout and
session_autoclose are configurable via `ceph fs set`. The MDS config
options mds_session_timeout, mds_session_autoclose, and
mds_max_file_size are now obsolete.
- Each mds rank now maintains a table that tracks open files and their
ancestor directories. Recovering MDS can quickly get open files' paths,
significantly reducing the time of loading inodes for open files. MDS
creates the table automatically if it does not exist.
- CephFS snapshot is now stable and enabled by default on new
filesystems. To enable snapshot on existing filesystems, use the command::
ceph fs set <fs_name> allow_new_snaps
The on-disk format of snapshot metadata has changed. The old format
metadata can not be properly handled in multiple active MDS
configuration. To guarantee all snapshot metadata on existing
filesystems get updated, perform the sequence of upgrading the MDS
cluster strictly.
See http://docs.ceph.com/docs/mimic/cephfs/upgrading/
For filesystems that have ever enabled snapshots, the multiple-active
MDS feature is disabled by the mimic monitor daemon. This will cause the
"restore previous max_mds" step in above URL to fail. To re-enable the
feature, either delete all old snapshots or scrub the whole filesystem:
- `ceph daemon <mds of rank 0> scrub_path /` - `ceph daemon <mds of rank
0> scrub_path '~mdsdir'`
- Support has been added in Mimic for quotas in the Linux kernel client
as of v4.17.
See http://docs.ceph.com/docs/mimic/cephfs/quota/
- Many fixes have been made to the MDS metadata balancer which
distributes load across MDS. It is expected that the automatic balancing
should work well for most use-cases. In Luminous, subtree pinning was
advised as a manual workaround for poor balancer behavior. This may no
longer be necessary so it is recommended to try experimentally disabling
pinning as a form of load balancing to see if the built-in balancer
adequately works for you. Please report any poor behavior post-upgrade.
- NFS-Ganesha is an NFS userspace server that can export shares from
multiple file systems, including CephFS. Support for this CephFS client
has improved significantly in Mimic. In particular, delegations are now
supported through the libcephfs library so that Ganesha may issue
delegations to its NFS clients allowing for safe write buffering and
coherent read caching. Documentation is also now available:
http://docs.ceph.com/docs/mimic/cephfs/nfs/
- MDS uptime is now available in the output of the MDS admin socket
`status` command.
- MDS performance counters for client requests now include average
latency as well as the count.
* *RBD*
- The RBD C API's `rbd_discard` method now enforces a maximum length of
2GB to match the C++ API's `Image::discard` method. This restriction
prevents overflow of the result code.
- The rbd CLI's `lock list` JSON and XML output has changed.
- The rbd CLI's `showmapped` JSON and XML output has changed.
- RBD now optionally supports simplified image clone semantics where
non-protected snapshots can be cloned; and snapshots with linked clones
can be removed and the space automatically reclaimed once all remaining
linked clones are detached. This feature is enabled by default if the
OSD "require-min-compat-client" flag is set to mimic or later; or can be
overridden via the "rbd_default_clone_format" configuration option.
- RBD now supports deep copy of images that preserves snapshot history.
* *RGW*
- The RGW Beast frontend is now declared stable and ready for production
use. :ref:`rgw_frontends` for details.
- Civetweb frontend has been updated to the latest 1.10 release.
- The S3 API now has support for multi-factor authentication. Refer to
:ref:`rgw_mfa` for details.
- RGW now has a sync plugin to sync to AWS and clouds with S3-like APIs.
* *MGR*
- The (read-only) Ceph manager dashboard introduced in Ceph Luminous has
been replaced with a new implementation, providing a drop-in replacement
offering a number of additional management features. To access the new
dashboard, you first need to define a username and password and create
an SSL certificate. See the :ref:`dashboard documentation
<mgr-dashboard-overview>` for a feature overview and installation
instructions.
- The `ceph-rest-api` command-line tool (obsoleted by the MGR `restful`
module and deprecated since v12.2.5) has been dropped.
There is a MGR module called `restful` which provides similar
functionality via a "pass through" method. See
http://docs.ceph.com/docs/master/mgr/restful for details.
- New command to track throughput and IOPS statistics, also available in
`ceph -s` and previously in `ceph -w`. To use this command, enable the
`iostat` Manager module and invoke it using `ceph iostat`. See the
:ref:`iostat documentation <mgr-iostat-overview>` for details.
* *build/packaging*
- The `rcceph` script (`systemd/ceph` in the source code tree, shipped
as `/usr/sbin/rcceph` in the ceph-base package for CentOS and SUSE) has
been dropped. This script was used to perform admin operations (start,
stop, restart, etc.) on all OSD and/or MON daemons running on a given
machine. This functionality is provided by the systemd target units
(`ceph-osd.target`, `ceph-mon.target`, etc.).
- The python-ceph-compat package is declared deprecated, and will be
dropped when all supported distros have completed the move to Python 3.
It has already been dropped from those supported distros where Python 3
is standard and Python 2 is optional (currently only SUSE).
- Ceph codebase has now moved to the C++-17 standard.
- The Ceph LZ4 compression plugin is now enabled by default, and
introduces a new build dependency.
[1]: https://openattic.org
Getting Ceph ------------
* Git at git://github.com/ceph/ceph.git * Tarball at
http://download.ceph.com/tarballs/ceph-13.2.0.tar.gz * For packages, see
http://docs.ceph.com/docs/master/install/get-packages/ * Release git
sha1: f38fff5d093da678f6736c7a008511873c8d0fda -- To unsubscribe from
this list: send the line "unsubscribe ceph-devel" in the body of a
message to majordomo@xxxxxxxxxxxxxxx More majordomo info at
http://vger.kernel.org/majordomo-info.html
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com