Re: [ceph-users] v19.2.0 Squid released

We have pushed a new 19.2.0 container image that uses ganesha v5.5 rather than v6. For those who hit this issue, rerunning the `ceph orch upgrade` command used for the original 19.2.0 upgrade (ceph orch upgrade start quay.io/ceph/ceph:v19.2.0) was tested and confirmed to get the nfs daemon running again. One caveat: the `mgr/cephadm/use_repo_digest` config option must be set to true so that cephadm can handle upgrading to a floating-tag image that has been modified since the previous upgrade. For those who haven't upgraded yet but are using both cephadm and nfs, it should now be safe to perform this upgrade. A minimal command sequence follows.
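
For convenience, the recovery steps above as a minimal sketch (the final status check is an optional extra, not part of the tested procedure):

    # Let cephadm resolve the re-pushed floating tag to its new digest
    ceph config set mgr mgr/cephadm/use_repo_digest true

    # Re-run the upgrade against the fixed v19.2.0 image
    ceph orch upgrade start quay.io/ceph/ceph:v19.2.0

    # Optionally watch the upgrade progress
    ceph orch upgrade status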

On Fri, Sep 27, 2024 at 11:40 AM Adam King <adking@xxxxxxxxxx> wrote:
WARNING: if you're using cephadm and nfs, please don't upgrade to this release for the time being. There are compatibility issues between cephadm's deployment of the NFS daemon and ganesha v6, which made its way into the release container.

On Thu, Sep 26, 2024 at 6:20 PM Laura Flores <lflores@xxxxxxxxxx> wrote:
We're very happy to announce the first stable release of the Squid series.

We express our gratitude to all members of the Ceph community who
contributed by proposing pull requests, testing this release, providing
feedback, and offering valuable suggestions.

Highlights:

RADOS
* BlueStore has been optimized for better performance in snapshot-intensive
workloads.
* BlueStore RocksDB LZ4 compression is now enabled by default to improve
average performance and "fast device" space usage (a way to verify the
new default is sketched after this list).
* Other improvements include more flexible EC configurations, an OpTracker
to help debug mgr module issues, and better scrub scheduling.
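
If you want to confirm the new RocksDB compression default on a Squid cluster, one way to check (this inspects the option's default rather than any per-OSD override; the exact default string may vary):

    # Print the description and default for the RocksDB options;
    # on Squid the default should include LZ4 compression
    ceph config help bluestore_rocksdb_options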

Dashboard
* Improved navigation layout

CephFS
* Support for managing CephFS snapshots and clones, as well as snapshot
schedule management (equivalent CLI operations are sketched after this
list)
* Manage authorization capabilities for CephFS resources
* Helpers for mounting a CephFS volume
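
These are dashboard workflows; for reference, a sketch of the equivalent CLI (the filesystem name `cephfs`, client `client.app`, and paths are illustrative):

    # Create an hourly snapshot schedule at the filesystem root
    ceph fs snap-schedule add / 1h --fs cephfs

    # Grant a client read/write access to a subtree
    ceph fs authorize cephfs client.app /shared rw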

RBD
* diff-iterate can now execute locally, bringing a dramatic performance
improvement for QEMU live disk synchronization and backup use cases (a
small CLI illustration follows this list).
* Support for cloning from non-user type snapshots has been added.
* The rbd-wnbd driver has gained the ability to multiplex image mappings.
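
The diff-iterate API is what backs `rbd diff`; as a quick illustration (pool `mypool`, image `myimage`, and snapshot `snap1` are hypothetical names):

    # List extents changed since a snapshot; with the fast-diff image
    # feature enabled, --whole-object can answer from the object map
    rbd diff --from-snap snap1 --whole-object mypool/myimage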

RGW
* The User Accounts feature unlocks several new AWS-compatible IAM APIs for
the self-service management of users, keys, groups, roles, policies, and
more (a brief getting-started sketch follows).
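
A rough getting-started sketch (the account and user names, the account-ID placeholder, and the endpoint URL are all illustrative):

    # Create an account, then an account-root user whose keys can drive
    # the IAM APIs through standard AWS tooling
    radosgw-admin account create --account-name=acme
    radosgw-admin user create --uid=acme-root --display-name="Acme Root" \
        --account-id=<id printed by the previous command> --account-root \
        --gen-access-key --gen-secret

    # e.g. create an additional user in the account via the IAM API
    aws --endpoint-url http://rgw.example.com:8000 iam create-user --user-name dev1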

Crimson/Seastore
* Crimson's first tech preview release! It supports RBD workloads on
replicated pools. For more information please visit:
https://ceph.io/en/news/crimson

We encourage you to read the full release notes at
https://ceph.io/en/news/blog/2024/v19-2-0-squid-released/

* Git at git://github.com/ceph/ceph.git
* Tarball at https://download.ceph.com/tarballs/ceph-19.2.0.tar.gz
* Containers at https://quay.io/repository/ceph/ceph
* For packages, see https://docs.ceph.com/en/latest/install/get-packages/
* Release git sha1: 16063ff2022298c9300e49a547a16ffda59baf13

--

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage <https://ceph.io>

Chicago, IL

lflores@xxxxxxx | lflores@xxxxxxxxxx
M: +17087388804
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
