Re: v19.2.1 Squid released


 



👌🥳🕺🕺
On Thu, 6 Feb 2025 at 2:56 PM, Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:

> We're happy to announce the 1st backport release in the Squid series.
>
> https://ceph.io/en/news/blog/2025/v19-2-1-squid-released/
>
> Notable Changes
> ---------------
> * CephFS: The command `fs subvolume create` now allows tagging subvolumes by
>   supplying the option `--earmark` with a unique identifier needed for NFS or
>   SMB services. The earmark string for a subvolume is empty by default. To
>   remove an already present earmark, assign an empty string to it.
>   Additionally, the commands `ceph fs subvolume earmark set`,
>   `ceph fs subvolume earmark get`, and `ceph fs subvolume earmark rm` have
>   been added to set, get, and remove the earmark for a given subvolume.
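>
>   For illustration, a possible workflow might look like the sketch below (the
>   volume and subvolume names `cephfs` and `subvol1` are placeholders, and the
>   exact argument syntax may differ; check the commands' `--help` output):
>
>     # create a subvolume tagged for NFS
>     ceph fs subvolume create cephfs subvol1 --earmark nfs
>     # inspect, change, or clear the earmark afterwards
>     ceph fs subvolume earmark get cephfs subvol1
>     ceph fs subvolume earmark set cephfs subvol1 --earmark smb
>     ceph fs subvolume earmark rm cephfs subvol1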
>
> * CephFS: Expanded removexattr support for CephFS virtual extended attributes.
>   Previously one had to use setxattr to restore the default in order to
>   "remove" a virtual extended attribute. You may now use removexattr to
>   remove one properly. You can also now remove the layout on the root inode,
>   which will then restore the layout to the default.
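>
>   A minimal sketch of the new behaviour (the mount point `/mnt/cephfs` and the
>   directory are placeholders; `setfattr -x` issues the removexattr call):
>
>     # before: "removing" a layout meant resetting it via setxattr
>     # now: the virtual extended attribute can be removed directly
>     setfattr -x ceph.dir.layout /mnt/cephfs/mydir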
>
> * RADOS: A performance bottleneck in the balancer mgr module has been fixed.
>
>   Related Tracker: https://tracker.ceph.com/issues/68657
>
> * RADOS: Based on tests performed at scale on an HDD-based Ceph cluster, it
>   was found that scheduling with mClock was not optimal with multiple OSD
>   shards. For example, in the test cluster with multiple OSD node failures,
>   client throughput was inconsistent across test runs and multiple slow
>   requests were reported. However, the same test with a single OSD shard and
>   multiple worker threads yielded significantly more consistent client and
>   recovery throughput across test runs. Therefore, as an interim measure
>   until the issue with multiple OSD shards (or multiple mClock queues per
>   OSD) is investigated and fixed, the following change to the default HDD OSD
>   shard configuration has been made:
>
>     - `osd_op_num_shards_hdd = 1` (was 5)
>     - `osd_op_num_threads_per_shard_hdd = 5` (was 1)
>
>   For more details, see https://tracker.ceph.com/issues/66289.
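>
>   As a sanity check after upgrading, the values an OSD is actually running
>   with can be inspected (a sketch; `osd.0` is a placeholder):
>
>     ceph config show osd.0 osd_op_num_shards_hdd
>     ceph config show osd.0 osd_op_num_threads_per_shard_hdd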
>
> * mgr/REST: The REST manager module now trims stored requests based on the
>   'max_requests' option. Without this feature, and in the absence of manual
>   deletion of old requests, the accumulation of requests in the array can
>   lead to Out Of Memory (OOM) issues, causing the Manager to crash.
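>
>   If a different cap is needed, the option should be adjustable like other
>   mgr module settings (a sketch; the key name below assumes the usual
>   mgr/<module>/<option> pattern and the value 500 is arbitrary):
>
>     ceph config set mgr mgr/restful/max_requests 500
>     ceph config get mgr mgr/restful/max_requests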
>
> Getting Ceph
> ------------
> * Git at git://github.com/ceph/ceph.git
> * Tarball at https://download.ceph.com/tarballs/ceph-19.2.1.tar.gz
> * Containers at https://quay.io/repository/ceph/ceph
> * For packages, see https://docs.ceph.com/en/latest/install/get-packages/
> * Release git sha1: 58a7fab8be0a062d730ad7da874972fd3fba59fb
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



