On behalf of @Radoslaw Zarzynski, rados approved.
Below is the summary of the rados suite failures, divided by component. @Adam King @Venky Shankar PTAL at the orch and cephfs failures to see if they are blockers.
Failures, unrelated:
RADOS:
1. https://tracker.ceph.com/issues/65183 - Overriding an EC pool needs the "--yes-i-really-mean-it" flag in addition to "force"
2. https://tracker.ceph.com/issues/62992 - Heartbeat crash in reset_timeout and clear_timeout
3. https://tracker.ceph.com/issues/58893 - test_map_discontinuity: AssertionError: wait_for_clean: failed before timeout expired
4. https://tracker.ceph.com/issues/61774 - centos 9 testing reveals rocksdb "Leak_StillReachable" memory leak in mons
5. https://tracker.ceph.com/issues/62776 - rados: cluster [WRN] overall HEALTH_WARN - do not have an application enabled
6. https://tracker.ceph.com/issues/59196 - ceph_test_lazy_omap_stats segfault while waiting for active+clean
Orchestrator:
1. https://tracker.ceph.com/issues/64208 - test_cephadm.sh: Container version mismatch causes job to fail.
CephFS:
1. https://tracker.ceph.com/issues/64946 - qa: unable to locate package libcephfs1
Teuthology:
1. https://tracker.ceph.com/issues/64727 - suites/dbench.sh: Socket exception: No route to host (113)
On Tue, Apr 16, 2024 at 9:22 AM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:
And approval is needed for:
fs - Venky approved?
powercycle - seems fs related, Venky, Brad PTL
On Mon, Apr 15, 2024 at 5:55 PM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:
>
> Still waiting for approvals:
>
> rados - Radek, Laura approved? Travis? Nizamudeen?
>
> ceph-volume issue was fixed by https://github.com/ceph/ceph/pull/56857
>
> We plan not to upgrade the LRC to 18.2.3 as we are very close to the
> first squid RC and will be using it for this purpose.
> Please speak up if this may present any issues.
>
> Thx
>
> On Fri, Apr 12, 2024 at 11:37 AM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/65393#note-1
> > Release Notes - TBD
> > LRC upgrade - TBD
> >
> > Seeking approvals/reviews for:
> >
> > smoke - infra issues, still trying, Laura PTL
> >
> > rados - Radek, Laura approved? Travis? Nizamudeen?
> >
> > rgw - Casey approved?
> > fs - Venky approved?
> > orch - Adam King approved?
> >
> > krbd - Ilya approved
> > powercycle - seems fs related, Venky, Brad PTL
> >
> > ceph-volume - will require
> > https://github.com/ceph/ceph/pull/56857/commits/63fe3921638f1fb7fc065907a9e1a64700f8a600
> > Guillaume is fixing it.
> >
> > TIA
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage
Chicago, IL
lflores@xxxxxxx | lflores@xxxxxxxxxx
M: +17087388804