Ceph Quarterly (CQ) - Issue #2

The second issue of "Ceph Quarterly" is attached to this email. Ceph Quarterly (or "CQ") is an overview of the past three months of upstream Ceph development. We provide CQ in three formats: A4, letter, and plain text wrapped at 80 columns.

Two news items arrived after the deadline for typesetting this issue. They are included here:

Grace Hopper Open Source Day 2023:

- On 22 Sep 2023, Ceph participated in Grace Hopper Open Source Day, an all-day hackathon for women and nonbinary developers. Laura Flores led the Ceph division, and Yaarit Hatuka, Shreyansh Sancheti, and Aishwarya Mathuria participated as mentors. From 12pm EST to 7:30pm EST, Laura showed more than 40 attendees how to run a Ceph vstart cluster in an Ubuntu Docker container. Yaarit, Shreyansh, and Aishwarya spent the day working one-on-one with attendees, helping them troubleshoot and work through a curated list of low-hanging-fruit issues. By the end of the day, attendees had submitted eight pull requests. As of this writing, two have been merged and the others are expected to be merged soon.

- For more information about GHC Open Source Day, see https://ghc.anitab.org/awards-programs/open-source-day/

Ceph partners with RCOS:

- Ceph has partnered for the first time with the Rensselaer Center for Open Source (RCOS), an organization at Rensselaer Polytechnic Institute that helps students jumpstart their careers in software by giving them the opportunity to work on various open source projects for class credit.

- Laura Flores, representing Ceph, is mentoring three RPI students on a project to improve the output of the `ceph balancer status` command (a sketch of querying this command from Python appears after this list).

- For more information about RCOS, see https://rcos.io/
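
For readers who have not seen it, the sketch below fetches the output that this
project aims to improve. It is a minimal example, assuming a running cluster,
an installed `ceph` CLI, and admin credentials; the fields in the JSON output
vary by release.

    # Minimal sketch: fetch and pretty-print `ceph balancer status` as JSON.
    # Assumes the `ceph` CLI is installed and can reach a running cluster.
    import json
    import subprocess

    def balancer_status() -> dict:
        # `--format json` asks for machine-readable output.
        out = subprocess.run(
            ["ceph", "balancer", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        print(json.dumps(balancer_status(), indent=2))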

Zac Dover
Upstream Documentation
Ceph Foundation
Ceph Quarterly
October 2023

Summary of Developments in Q3
-----------------------------

CephFS:

A non-blocking I/O API for libcephfs has been added to Ceph:
https://github.com/ceph/ceph/pull/48038

A cause of potential deadlock in the Python libcephfs bindings, which also
affected the mgr modules that use them, has been fixed:
https://github.com/ceph/ceph/pull/52290
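
For context, the bindings in question are the `cephfs` module that mgr modules
import. Below is a minimal, hedged sketch of using that binding directly; it
assumes the python3-cephfs package, /etc/ceph/ceph.conf, and a client keyring
with access to the default filesystem.

    # Minimal sketch of the Python libcephfs binding (python3-cephfs).
    # Assumes /etc/ceph/ceph.conf and a usable client keyring.
    import cephfs

    fs = cephfs.LibCephFS(conffile="/etc/ceph/ceph.conf")
    fs.mount()                         # mount the default filesystem at "/"
    try:
        fd = fs.open(b"/cq-demo.txt", "w", 0o644)
        fs.write(fd, b"hello from libcephfs\n", 0)  # write at offset 0
        fs.close(fd)

        fd = fs.open(b"/cq-demo.txt", "r", 0o644)
        print(fs.read(fd, 0, 64))      # read up to 64 bytes from offset 0
        fs.close(fd)
    finally:
        fs.unmount()
        fs.shutdown()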

MDS: acquisition throttles have been adjusted to more sensible defaults:
https://github.com/ceph/ceph/pull/52577

MDS: Buggy clients are now evicted in order to keep the MDS available:
https://github.com/ceph/ceph/pull/52944


Cephadm:

Support for init containers has been added. Init containers allow custom
actions to run before the daemon container starts:
https://github.com/ceph/ceph/pull/52178

We announce support for deploying the NVMe-oF gateway with cephadm:
https://github.com/ceph/ceph/pull/50423,
https://github.com/ceph/ceph/pull/52691

LV devices are now reported by ceph-volume in the inventory list, and can be
prepared as OSDs: https://github.com/ceph/ceph/pull/52877
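
As a rough illustration of the new behaviour, the sketch below lists the
inventory and then prepares an existing logical volume as an OSD. The vg/lv
name is a placeholder, and the commands must run as root on an OSD host.

    # Rough sketch: inspect ceph-volume's inventory and prepare an existing
    # LV as an OSD. "vg_ceph/lv_osd0" is a placeholder; run as root.
    import json
    import subprocess

    inventory = json.loads(
        subprocess.run(["ceph-volume", "inventory", "--format", "json"],
                       check=True, capture_output=True, text=True).stdout
    )
    for dev in inventory:
        print(dev.get("path"), "available:", dev.get("available"))

    # Prepare a logical volume that the inventory now reports.
    subprocess.run(
        ["ceph-volume", "lvm", "prepare", "--data", "vg_ceph/lv_osd0"],
        check=True)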

cephadm is now split into multiple files in order to make it easier for humans
to read and understand.  A new procedure has been added to the documentation
that describes how to acquire this new version of cephadm:
https://github.com/ceph/ceph/pull/53052
https://docs.ceph.com/en/latest/cephadm/install/#install-cephadm
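
What lands on disk is still a single compiled file. The sketch below shows one
way to fetch it from Python; the download URL is only an example for a Reef
build, and the linked documentation is the authoritative source for the
correct path for your release.

    # Illustrative sketch: download the compiled cephadm file and mark it
    # executable. The URL is an example path for a Reef build; consult the
    # documentation linked above for the canonical location.
    import os
    import stat
    import urllib.request

    URL = "https://download.ceph.com/rpm-18.2.0/el9/noarch/cephadm"  # example

    urllib.request.urlretrieve(URL, "cephadm")
    mode = os.stat("cephadm").st_mode
    os.chmod("cephadm", mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
    print("fetched ./cephadm; run `./cephadm version` to verify")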


Crimson:

Support for multicore has been added to Crimson:
https://github.com/ceph/ceph/pull/51147,
https://github.com/ceph/ceph/pull/51770,
https://github.com/ceph/ceph/pull/51916,
https://github.com/ceph/ceph/pull/52306

Infrastructure to support erasure coding has been added to Crimson:
https://github.com/ceph/ceph/pull/52211

We announce the introduction of a performance test suite for Crimson:
https://github.com/ceph/ceph/pull/50458


Dashboard:

RGW multisite configuration can now be imported from a secondary cluster or
exported to a secondary cluster: https://github.com/ceph/ceph/pull/50706

We announce several upgrades to the Cluster User Interface and the Cluster API:
https://github.com/ceph/ceph/pull/52351,
https://github.com/ceph/ceph/pull/52395,
https://github.com/ceph/ceph/pull/52903,
https://github.com/ceph/ceph/pull/52919,
https://github.com/ceph/ceph/pull/52222,
https://github.com/ceph/ceph/pull/53022

More detail has been added to the RGW overview. This includes more granular
information about daemons, zones, buckets, users, and used capacity (the
capacity used by all the pools in the cluster). Cards detailing these assets
have been added to the RGW overview dashboard:
https://github.com/ceph/ceph/pull/52317,
https://github.com/ceph/ceph/pull/52405,
https://github.com/ceph/ceph/pull/52915

It is now possible to manage CephFS subvolumes from the dashboard. This
includes creating, editing, and removing subvolumes; creating, editing, and
removing subvolume groups (including groups with snapshots); and displaying
subvolume groups in the CephFS subvolume tab. A sketch of the equivalent CLI
operations follows the list of pull requests below:
https://github.com/ceph/ceph/pull/52786,
https://github.com/ceph/ceph/pull/52861,
https://github.com/ceph/ceph/pull/52869,
https://github.com/ceph/ceph/pull/52886,
https://github.com/ceph/ceph/pull/52898,
https://github.com/ceph/ceph/pull/53018,
https://github.com/ceph/ceph/pull/53182,
https://github.com/ceph/ceph/pull/53246
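
These dashboard operations map onto the existing `ceph fs subvolume` and
`ceph fs subvolumegroup` commands. The following minimal sketch drives the
equivalent CLI calls from Python; the volume, group, and subvolume names are
placeholders.

    # Minimal sketch of the CLI operations the dashboard now exposes.
    # "cephfs", "group1", and "sv1" are placeholder names.
    import subprocess

    def ceph(*args):
        subprocess.run(["ceph", *args], check=True)

    ceph("fs", "subvolumegroup", "create", "cephfs", "group1")
    ceph("fs", "subvolume", "create", "cephfs", "sv1",
         "--group_name", "group1")
    ceph("fs", "subvolume", "ls", "cephfs", "--group_name", "group1")
    ceph("fs", "subvolume", "rm", "cephfs", "sv1", "--group_name", "group1")
    ceph("fs", "subvolumegroup", "rm", "cephfs", "group1")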


RADOS:

Improved robustness against disk corruption of key data structures - OSD
superblock data is now replicated in an onode's OMAP, which makes recovery
possible even when onode data is corrupted:
https://github.com/ceph/ceph/pull/50326

We introduce a disk-fragmentation histogram for BlueStore, accessible through
an admin socket: https://github.com/ceph/ceph/pull/51820

The amount of metadata necessary to create a snapshot has been significantly
reduced, making snapshot creation and deletion more efficient:
https://github.com/ceph/ceph/pull/53178

ceph-mgr is now more resilient to blocklisting - cluster connections between
MGRs and RADOS clients are now reopened when this occurs:
https://github.com/ceph/ceph/pull/50291

LZ4 compression can now be enabled for BlueStore RocksDB:
https://github.com/ceph/ceph/pull/53343
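
The RocksDB options that BlueStore passes down are controlled by the
bluestore_rocksdb_options setting, and LZ4 is selected by the RocksDB token
compression=kLZ4Compression. The sketch below only inspects the current value;
whether LZ4 is on by default, and how best to set it, depends on the release,
so check the pull request above before overriding anything.

    # Sketch: inspect the RocksDB option string that BlueStore uses.
    # LZ4 is selected by the token "compression=kLZ4Compression" inside it.
    import subprocess

    opts = subprocess.run(
        ["ceph", "config", "get", "osd", "bluestore_rocksdb_options"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    print("current bluestore_rocksdb_options:", opts)
    print("LZ4 enabled:", "compression=kLZ4Compression" in opts)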


RBD:

The Windows driver now runs RBD functional tests in CI:
https://github.com/ceph/ceph/pull/50141

The handling of blocklisting in rbd-mirror and krbd has been improved:
https://github.com/ceph/ceph/pull/52990, Linux kernel 6.5-rc5

Various rbd-mirror bug fixes: https://github.com/ceph/ceph/pull/52057,
https://github.com/ceph/ceph/pull/52086,
https://github.com/ceph/ceph/pull/53251


RGW:

Support for versioned objects has been added to emergency bucket repair tools,
and rgw-restore-bucket-index can now recognize versioned buckets and RGW
objects that have names beginning with underscores:
https://github.com/ceph/ceph/pull/51071

Multisite sync load is now distributed fairly across gateways:
https://github.com/ceph/ceph/pull/51493

Several issues have been fixed, and the Trino/TPC-DS benchmark now runs with
s3select: https://github.com/ceph/ceph/pull/52651,
https://github.com/ceph/s3select/pull/132
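
On the client side, an s3select query against RGW goes through the standard
S3 SelectObjectContent call. The hedged sketch below uses boto3; the endpoint,
credentials, bucket, and key are placeholders.

    # Hedged sketch: run an S3 Select (s3select) query against an RGW
    # endpoint with boto3. Endpoint, credentials, bucket, and key are
    # placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8000",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    resp = s3.select_object_content(
        Bucket="demo-bucket",
        Key="sales.csv",
        ExpressionType="SQL",
        Expression="SELECT s._1, s._2 FROM S3Object s LIMIT 10",
        InputSerialization={"CSV": {}, "CompressionType": "NONE"},
        OutputSerialization={"CSV": {}},
    )

    for event in resp["Payload"]:
        if "Records" in event:
            print(event["Records"]["Payload"].decode(), end="")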

An initial version of a posix-fs backend for RGW has been developed:
https://github.com/ceph/ceph/pull/52933


Telemetry:

Over the past two months, telemetry has reported a storage capacity of 27 PB
from ~170 Reef clusters. Approximately 2,800 clusters in total report ~1.2 EB,
including the 27 PB from the Reef clusters. See telemetry-public.ceph.com.


User+Dev Meeting Relaunched:

Neha Ojha and Laura Flores organized the relaunch of the User+Dev meeting,
which was held on 21 Sep 2023. Two members of the Ceph upstream community gave
presentations at this meeting: Cory Snyder of 11:11 Systems presented "What To
Do When Ceph isn't Cephing"
(https://ceph.io/assets/pdfs/user_dev_meeting_2023_09_21_cory_snyder.pdf), and
Jonas Sterr of Thomas Krenn AG presented "Ceph Usability Improvements"
(https://ceph.io/assets/pdfs/user_dev_meeting_2023_09_21_jonas_sterr.pdf). The
minutes of the meeting are available here:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.

The User+Dev Meeting is held at 14:00 UTC on the third Thursday of every
month, at https://meet.jit.si/ceph-user-dev-monthly.



CQ is a production of the Ceph Foundation. To support or join the Ceph Foundation, contact membership@xxxxxxxxxxxxxxxxxxx.