Re: quincy v17.2.6 QE Validation status

I reviewed the upgrade tests.

I opened two new trackers:
1. https://tracker.ceph.com/issues/59121 - "No upgrade in progress" during
upgrade tests - Ceph - Orchestrator
2. https://tracker.ceph.com/issues/59124 - "Health check failed: 1/3 mons
down, quorum b,c (MON_DOWN)" during quincy p2p upgrade test - Ceph - RADOS

Starred (*) trackers occurred frequently throughout the suites.

https://pulpito.ceph.com/yuriw-2023-03-14_21:33:13-upgrade:octopus-x-quincy-release-distro-default-smithi/
Failures:
    1. *https://tracker.ceph.com/issues/59121
    2. *https://tracker.ceph.com/issues/56393
    3. https://tracker.ceph.com/issues/53615
Details:
    1. *"No upgrade in progress" during upgrade tests - Ceph - Orchestrator
    2. *thrash-erasure-code-big: failed to complete snap trimming before
timeout - Ceph - RADOS
    3. qa: upgrade test fails with "timeout expired in wait_until_healthy"
- Ceph - CephFS

https://pulpito.ceph.com/yuriw-2023-03-15_21:14:59-upgrade:pacific-x-quincy-release-distro-default-smithi/
Failures:
    1. *https://tracker.ceph.com/issues/56393
    2. https://tracker.ceph.com/issues/58914
    3. *https://tracker.ceph.com/issues/59121
    4. https://tracker.ceph.com/issues/59123
Details:
    1. *thrash-erasure-code-big: failed to complete snap trimming before
timeout - Ceph - RADOS
    2. [ FAILED ] TestClsRbd.group_snap_list_max_read in
upgrade:quincy-x-reef - Ceph - RBD
    3. *"No upgrade in progress" during upgrade tests - Ceph - Orchestrator
    4. Timeout opening channel - Tools - Teuthology

https://pulpito.ceph.com/yuriw-2023-03-14_21:36:24-upgrade:quincy-p2p-quincy-release-distro-default-smithi/
Failures:
    1. https://tracker.ceph.com/issues/59124
Details:
    1. "Health check failed: 1/3 mons down, quorum b,c (MON_DOWN)" during
quincy p2p upgrade test - Ceph - RADOS

https://pulpito.ceph.com/yuriw-2023-03-15_15:25:50-upgrade-clients:client-upgrade-octopus-quincy-quincy-release-distro-default-smithi/
1 test failed due to a failure to fetch packages

https://pulpito.ceph.com/yuriw-2023-03-15_15:26:37-upgrade-clients:client-upgrade-pacific-quincy-quincy-release-distro-default-smithi/
All green

On Tue, Mar 21, 2023 at 3:06 PM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The reruns were in the queue for 4 days because of some slowness issues.
> The core team (Neha, Radek, Laura, and others) are trying to narrow
> down the root cause.
>
> Seeking approvals/reviews for:
>
> rados - Neha, Radek, Travis, Ernesto, Adam King (we still have to test
> and merge at least one PR https://github.com/ceph/ceph/pull/50575 for
> the core)
> rgw - Casey
> fs - Venky (the fs suite has an unusually high amount of failed jobs,
> any reason to suspect it in the observed slowness?)
> orch - Adam King
> rbd - Ilya
> krbd - Ilya
> upgrade/octopus-x - Laura is looking into failures
> upgrade/pacific-x - Laura is looking into failures
> upgrade/quincy-p2p - Laura is looking into failures
> client-upgrade-octopus-quincy-quincy - missing packages, Adam Kraitman
> is looking into it
> powercycle - Brad
> ceph-volume - needs a rerun on merged
> https://github.com/ceph/ceph-ansible/pull/7409
>
> Please reply to this email with approval and/or trackers of known
> issues/PRs to address them.
>
> Also, share any findings or hypotheses about the slowness in the
> execution of the suite.
>
> Josh, Neha - gibba and LRC upgrades pending major suites approvals.
> RC release - pending major suites approvals.
>
> Thx
> YuriW
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage <https://ceph.io>

Chicago, IL

lflores@xxxxxxx | lflores@xxxxxxxxxx
M: +17087388804
