Ceph Leadership Team meeting 2021-07-14

Hello,

Today's meeting minutes:

- 16.2.5 shipped on time, next up is 15.2.13 (no particular urgency,
  but we should start rounding up PRs)

- move release containers to quay (Dimitri)
  - we are currently split between dockerhub and quay, which causes
    confusion
  - dockerhub is full of legacy cruft, e.g. daemon-base images, which
    are used only by ceph-ansible and ceph-nano
  - only new-style images/tags would be pushed to quay (example pull
    below)
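  - a rough sketch of what pulling a release image would look like
    after the move, assuming the new-style images land under
    quay.io/ceph/ceph (the tag is just an example):

      # new-style release image, pulled from quay instead of dockerhub
      podman pull quay.io/ceph/ceph:v16.2.5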

- update release email/blog template to include links to the container
  registry web UI (David)
- improve release notes script (Josh)
  - ideally get to the point where the output doesn't require any
    manual massaging

- Guillaume is taking over ceph-volume maintainership from Jan
- ceph.io team page is woefully out of date
  - https://github.com/ceph/ceph.io/pull/247

- high-level development priorities doc (why as opposed to what)
  - https://docs.google.com/document/d/1kF8GEXUwB8y-SKZP6TM9mhfYluldEw_2D2qxyOy0p74/edit

- need to flesh out a strategy for local storage and replica-1 use cases
  - want raw devices for OSDs but also the ability to carve out chunks
    for db and wal
  - existing solutions seem incomplete
  - introduce rook-local (rook on bare metal) operator?
  - replica-1 use cases
    - scratch storage for playing around, prototyping, etc.
    - storage for workloads such as MongoDB that do their own
      replication
  - offload these to the operator, or bite the bullet and make the
    replica-1 corner case work well?
    - the stack is a bit too complicated for something that could be
      just a local partition, but things like mirroring would still
      work
    - must avoid spreading replica-1 PVs across OSDs
      - could be pgp_num = 1 or a custom CRUSH rule (see the sketch
        below)
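  - a minimal sketch of the pgp_num = 1 idea, assuming a hypothetical
    "scratch" pool (size=1 pools are disallowed by default and have to
    be explicitly enabled first):

      # allow size=1 pools cluster-wide (off by default for safety)
      ceph config set global mon_allow_pool_size_one true

      # one PG, one placement: all objects land on a single OSD
      ceph osd pool create scratch 1 1 replicated
      ceph osd pool set scratch pg_autoscale_mode off

      # drop to a single replica
      ceph osd pool set scratch size 1 --yes-i-really-mean-it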

- rados suite environment issues are being worked out (CentOS Stream
  dependencies, SELinux)
- the fs:workload suite, now migrated to cephadm, seems to be exposing
  a race in podman/runc related to starting containers
  - https://github.com/ceph/ceph/pull/42000

- component suites migrated to cephadm can pick a single distro
  - we have enough distro coverage in the cephadm suite

Thanks,

                Ilya


