Re: squid 19.1.1 RC QE validation status

I can do the gibba upgrade after everything's approved.
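For reference, a minimal sketch of the upgrade itself, assuming gibba is a
cephadm-managed cluster (the exact image or flags may differ for the RC build):

    # start a rolling upgrade to the release candidate
    ceph orch upgrade start --ceph-version 19.1.1
    # monitor, then confirm every daemon reports the new version
    ceph orch upgrade status
    ceph versions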

On Mon, Aug 19, 2024 at 9:47 AM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:

> Laura, we still need approval from Guillaume, and the gibba upgrade.
>
> On Mon, Aug 19, 2024 at 7:31 AM Laura Flores <lflores@xxxxxxxxxx> wrote:
>
>> Thanks @Adam King <adking@xxxxxxxxxx>!
>>
>> @Yuri Weinstein <yweinste@xxxxxxxxxx> the upgrade suites are approved.
>>
>> On Mon, Aug 19, 2024 at 9:28 AM Adam King <adking@xxxxxxxxxx> wrote:
>>
>>> https://tracker.ceph.com/issues/67583 didn't reproduce across 10 reruns:
>>> https://pulpito.ceph.com/lflores-2024-08-16_00:04:51-upgrade:quincy-x-squid-release-distro-default-smithi/.
>>> Given that the original failure was just "Unable to find image
>>> 'quay.io/ceph/grafana:9.4.12' locally", which doesn't look very serious
>>> anyway, I don't think there's any reason for it to hold up the release.
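>>> For anyone triaging a similar failure, a minimal sketch of checking it by
>>> hand on a cephadm host follows; the daemon name is hypothetical:
>>>
>>>     # confirm the registry actually serves the tag
>>>     podman pull quay.io/ceph/grafana:9.4.12
>>>     # then have cephadm retry the deployment
>>>     ceph orch daemon redeploy grafana.<host> quay.io/ceph/grafana:9.4.12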
>>>
>>> On Thu, Aug 15, 2024 at 6:53 PM Laura Flores <lflores@xxxxxxxxxx> wrote:
>>>
>>>> The upgrade suites look mostly good to me, except for one tracker that I
>>>> think would be in @Adam King <adking@xxxxxxxxxx>'s realm to look at.
>>>> If the new grafana issue below is deemed okay, then we can proceed with
>>>> approving the upgrade suite.
>>>>
>>>> This issue stood out to me, where the cluster had trouble pulling the
>>>> grafana image locally to redeploy it. @Adam King <adking@xxxxxxxxxx>, can
>>>> you take a look?
>>>>
>>>>    - https://tracker.ceph.com/issues/67583 - upgrade:quincy-x/stress-split:
>>>>    Cluster fails to redeploy the grafana daemon after the image cannot be
>>>>    found locally
>>>>
>>>>
>>>> Otherwise, tests failed from cluster log warnings that are expected
>>>> during upgrade tests. Many of these warnings have already been fixed, and
>>>> the fixes are in the process of being backported.
>>>> For each test, I checked that the cluster had upgraded all daemons to
>>>> 19.1.1, and that was the case (see the check sketched after the list
>>>> below).
>>>>
>>>>    - https://tracker.ceph.com/issues/66602 - rados/upgrade: Health
>>>>    check failed: 1 pool(s) do not have an application enabled
>>>>    (POOL_APP_NOT_ENABLED)
>>>>    - https://tracker.ceph.com/issues/65422 - upgrade/quincy-x: "1 pg
>>>>    degraded (PG_DEGRADED)" in cluster log
>>>>    - https://tracker.ceph.com/issues/67584 - upgrade:quincy-x: "cluster
>>>>    [WRN] Health check failed: 1 osds down (OSD_DOWN)" in cluster log
>>>>    - https://tracker.ceph.com/issues/64460 - rados/upgrade: "[WRN]
>>>>    MON_DOWN: 1/3 mons down, quorum a,b" in cluster log
>>>>    - https://tracker.ceph.com/issues/66809 - upgrade/quincy-x;
>>>>    upgrade/reef-x: "Health check failed: Reduced data availability: 1 pg
>>>>    peering (PG_AVAILABILITY)" in cluster log
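>>>>
>>>> The per-test version check mentioned above amounts to something like the
>>>> following sketch, run against each upgraded cluster:
>>>>
>>>>     # every daemon section should report only the 19.1.1 build
>>>>     ceph versions
>>>>     # and the warnings above should have cleared once the upgrade finished
>>>>     ceph health detail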
>>>>
>>>>
>>>>
>>>> On Thu, Aug 15, 2024 at 11:55 AM Laura Flores <lflores@xxxxxxxxxx>
>>>> wrote:
>>>>
>>>>> Rados approved. Failures tracked here:
>>>>> https://tracker.ceph.com/projects/rados/wiki/SQUID#v1911-httpstrackercephcomissues67340
>>>>>
>>>>> On Thu, Aug 15, 2024 at 11:30 AM Yuri Weinstein <yweinste@xxxxxxxxxx>
>>>>> wrote:
>>>>>
>>>>>> Laura,
>>>>>>
>>>>>> The PR was cherry-picked and the `squid-release` branch was built.
>>>>>> Please review the run results in the tracker.
>>>>>>
>>>>>> On Wed, Aug 14, 2024 at 2:18 PM Laura Flores <lflores@xxxxxxxxxx>
>>>>>> wrote:
>>>>>> >
>>>>>> > Hey @Yuri Weinstein <yweinste@xxxxxxxxxx>,
>>>>>> >
>>>>>> > We've fixed a couple of issues and now need a few things rerun.
>>>>>> >
>>>>>> >
>>>>>> >    1. Can you please rerun upgrade/reef-x and upgrade/quincy-x?
>>>>>> >       - Reasoning: Many jobs in those suites died due to
>>>>>> >       https://tracker.ceph.com/issues/66883, which we traced to a
>>>>>> >       recent merge in teuthology. Now that the offending commit has
>>>>>> >       been reverted, we are ready to have those suites rerun.
>>>>>> >    2. Can you please cherry-pick
>>>>>> >       https://github.com/ceph/ceph/pull/58607 to squid-release and
>>>>>> >       reschedule rados:thrash-old-clients? (See the sketch after this
>>>>>> >       list.)
>>>>>> >       - Reasoning: Since we stopped building focal for squid, we can
>>>>>> >       no longer test squid against pacific clients.
>>>>>> >       - For this second RC, we had to make the decision to drop
>>>>>> >       pacific from the rados:thrash-old-clients tests, which will now
>>>>>> >       use centos 9 stream packages to test against only reef and
>>>>>> >       quincy clients (https://github.com/ceph/ceph/pull/58607).
>>>>>> >       - We have raised https://tracker.ceph.com/issues/67469 to track
>>>>>> >       the implementation of a containerized solution for older
>>>>>> >       clients that don't have centos 9 stream packages, so that we
>>>>>> >       can reincorporate pacific in the future.
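>>>>>> >
>>>>>> > A minimal sketch of the cherry-pick in item 2, assuming the fix has
>>>>>> > merged and <sha> stands in for its commit id (the sha itself is not
>>>>>> > given in this thread):
>>>>>> >
>>>>>> >     git fetch origin squid-release
>>>>>> >     git checkout squid-release
>>>>>> >     # -x records the original commit id; for a merge commit, add -m 1
>>>>>> >     git cherry-pick -x <sha>
>>>>>> >     git push origin squid-release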
>>>>>> >
>>>>>> > After these two things are rescheduled, we can proceed with a rados
>>>>>> > suite approval and an upgrade suite approval.
>>>>>> >
>>>>>> > Thanks,
>>>>>> > Laura
>>>>>> >
>>>>>> > On Wed, Aug 14, 2024 at 12:49 PM Adam Emerson <aemerson@xxxxxxxxxx>
>>>>>> wrote:
>>>>>> >
>>>>>> > > On 14/08/2024, Yuri Weinstein wrote:
>>>>>> > > > Still waiting to hear back:
>>>>>> > > >
>>>>>> > > > rgw - Eric, Adam E
>>>>>> > >
>>>>>> > > Approved.
>>>>>> > >
>>>>>> > > (Sorry, I thought we were supposed to reply on the tracker.)
>>>>>>
>>>>>>
>>>>>
>>>>
>>

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage <https://ceph.io>

Chicago, IL

lflores@xxxxxxx | lflores@xxxxxxxxxx <lflores@xxxxxxxxxx>
M: +17087388804
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



