Hi,
since you pointed out the CephFS features, I wanted to raise some
awareness of snapshot scheduling/creation before 19.2.0 is released:
https://tracker.ceph.com/issues/67790
I tried 19.1.1 and am failing to create snapshots:
ceph01:~ # ceph fs subvolume snapshot create cephfs subvol1 test-snap1
Error EPERM: error in mkdir /volumes/_nogroup/subvol1/.snap/test-snap1
This works in Reef.
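For anyone who wants to reproduce this before 19.2.0, a minimal sketch
(the subvolume name, snapshot name, and schedule interval are just
examples; snap-schedule needs its mgr module enabled first):

```shell
# Enable the snapshot scheduler mgr module (needed for snap-schedule).
ceph mgr module enable snap_schedule

# Create a test subvolume and try a manual snapshot --
# this is the step that fails with EPERM on 19.1.1.
ceph fs subvolume create cephfs subvol1
ceph fs subvolume snapshot create cephfs subvol1 test-snap1

# Scheduled snapshots exercise the same path; e.g. an hourly schedule:
ceph fs snap-schedule add /volumes/_nogroup/subvol1 1h
```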
Thanks,
Eugen
Quoting Yuri Weinstein <yweinste@xxxxxxxxxx>:
This is the second release candidate for Squid.
Feature highlights:
RGW
* Fixed a regression in bucket ownership for Keystone users and
implicit tenants.
* The User Accounts feature unlocks several new AWS-compatible IAM APIs
for the self-service management of users, keys, groups, roles, policies,
and more.
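As a rough sketch of what the new IAM APIs enable (the endpoint URL,
account name, and user names below are hypothetical; see the Squid RGW
account documentation for the exact workflow):

```shell
# Create an RGW account and an account root user (names are examples).
radosgw-admin account create --account-name=myaccount
radosgw-admin user create --uid=root1 --display-name="Account Root" \
    --account-id=<account-id> --account-root --gen-access-key --gen-secret

# With the root user's keys configured in the AWS CLI, self-service
# user management goes through standard IAM calls against RGW:
aws --endpoint-url http://rgw.example.com:8000 iam create-user --user-name dev1
```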
RADOS
* BlueStore has been optimized for better performance in
snapshot-intensive workloads.
* BlueStore RocksDB LZ4 compression is now enabled by default to improve
average performance and "fast device" space usage.
* Other improvements include more flexible EC configurations, an
OpTracker to help debug mgr module issues, and better scrub scheduling.
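To check the effective RocksDB options on a running cluster, one way
(assuming an OSD named osd.0 exists; the exact option string may differ
between builds) is:

```shell
# Show the effective RocksDB options for one OSD; look for a
# "compression=" entry in the output.
ceph config show osd.0 bluestore_rocksdb_options
```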
Dashboard
* Rearranged Navigation Layout: The navigation layout has been reorganized
for improved usability and easier access to key features.
CephFS Improvements
* Support for managing CephFS snapshots and clones, as well as
snapshot schedule management
* Manage authorization capabilities for CephFS resources
* Helpers on mounting a CephFS volume
RGW Improvements
* Support for managing bucket policies
* Add/Remove bucket tags
* ACL Management
* Several UI/UX Improvements to the bucket form
* Monitoring: Grafana dashboards are now loaded into the container at
runtime rather than building a Grafana image with the dashboards baked
in. Official Ceph Grafana images can be found at quay.io/ceph/grafana
* Monitoring: RGW S3 Analytics: A new Grafana dashboard is now
available, enabling you to visualize per-bucket and per-user analytics
data, including total GETs, PUTs, deletes, copies, and list metrics.
Crimson/Seastore
* Crimson's first tech preview release, supporting RBD workloads on
replicated pools.
For more information please visit: https://ceph.io/en/news/crimson
If any of our community members would like to help us with performance
investigations or regression testing of the Squid release candidate,
please feel free to provide feedback via email or in
https://pad.ceph.com/p/squid_scale_testing. For more active
discussions, please use the #ceph-at-scale slack channel in
ceph-storage.slack.com.
* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-19.1.1.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 1d9f35852eef16b81614e38a05cf88b505cc142b
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx