Thank you all!
We want to merge the PR with whitelisting added https://github.com/ceph/ceph/pull/55717 and will start the 16.2.15 build/release afterward.
On Mon, Feb 26, 2024 at 8:25 AM Laura Flores <lflores@xxxxxxxxxx> wrote:
Thank you Junior for your thorough review of the RADOS suite. Aside from a few remaining warnings in the final run that could benefit from whitelisting, these are not blockers. Rados approved.

On Mon, Feb 26, 2024 at 9:29 AM Kamoltat Sirivadhna <ksirivad@xxxxxxxxxx> wrote:

Details of RADOS run analysis:
yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi
<https://pulpito.ceph.com/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/#collapseOne>
1. https://tracker.ceph.com/issues/64455 task/test_orch_cli: "Health
check failed: cephadm background work is paused (CEPHADM_PAUSED)" in
cluster log (whitelist)
2. https://tracker.ceph.com/issues/64454 rados/cephadm/mgr-nfs-upgrade:
"Health check failed: 1 stray daemon(s) not managed by cephadm
(CEPHADM_STRAY_DAEMON)" in cluster log (whitelist)
3. https://tracker.ceph.com/issues/63887: Starting alertmanager fails
from missing container (happens in Pacific)
4. Failed to reconnect to smithi155 [7566763
<https://pulpito.ceph.com/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566763>
]
5. https://tracker.ceph.com/issues/64278 Unable to update caps for
client.iscsi.iscsi.a (known failures)
6. https://tracker.ceph.com/issues/64452 Teuthology runs into
"TypeError: expected string or bytes-like object" during log scraping
(teuthology failure)
7. https://tracker.ceph.com/issues/64343 Expected warnings that need to
be whitelisted cause rados/cephadm tests to fail for 7566717
<https://pulpito.ceph.com/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566717>
we need to add (ERR|WRN|SEC)
8. https://tracker.ceph.com/issues/58145 orch/cephadm: nfs tests failing
to mount exports ('mount -t nfs 10.0.31.120:/fake /mnt/foo' fails)
7566724 (resolved issue re-opened)
9. https://tracker.ceph.com/issues/63577 cephadm:
docker.io/library/haproxy: toomanyrequests: You have reached your pull
rate limit.
10. https://tracker.ceph.com/issues/54071 rados/cephadm/osds: Invalid
command: missing required parameter hostname(<string>) 7566747
<https://pulpito.ceph.com/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566747>
Note:
1. Although 7566762 seems like a different failure from what is
displayed in pulpito, in the teuth log it failed because of
https://tracker.ceph.com/issues/64278.
2. rados/cephadm/thrash/ … failed a lot because of
https://tracker.ceph.com/issues/64452
3. 7566717
<https://pulpito.ceph.com/yuriw-2024-02-19_19:25:49-rados-pacific-release-distro-default-smithi/7566717>.
failed because we didn't whitelist (ERR|WRN|SEC): tasks.cephadm:Checking
cluster log for badness...
4. 7566724 https://tracker.ceph.com/issues/58145 ganesha seems to have
been resolved a year ago, but popped up again, so I re-opened the tracker
and pinged Adam King (resolved)
7566777, 7566781, 7566796 are due to https://tracker.ceph.com/issues/63577
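The cluster-log check behind several of the failures above boils down to a severity scan: teuthology flags any cluster log line containing ERR, WRN, or SEC unless it matches a whitelist (ignore) pattern. A minimal sketch of that logic, with hypothetical helper names rather than teuthology's actual code:

```python
import re

# A line is "badness" if it carries one of the scanned severity tokens
# and no whitelist pattern covers it (sketch of the teuthology behavior).
SEVERITY = re.compile(r"\b(ERR|WRN|SEC)\b")

ignorelist = [re.compile(p) for p in (
    r"CEPHADM_PAUSED",         # tracker 64455
    r"CACHE_POOL_NEAR_FULL",   # rados/cephadm/thrash rerun
)]

def is_badness(line: str) -> bool:
    """Return True if this log line would fail the run."""
    if not SEVERITY.search(line):
        return False
    return not any(p.search(line) for p in ignorelist)

print(is_badness("cluster [WRN] Health check failed: (CEPHADM_PAUSED)"))  # False
print(is_badness("cluster [ERR] scrub mismatch"))                         # True
```

This is why a missing entry such as (ERR|WRN|SEC) coverage for an expected warning turns an otherwise healthy job into a failure.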
Whitelisted and re-ran:
yuriw-2024-02-22_21:39:39-rados-pacific-release-distro-default-smithi/
<https://pulpito.ceph.com/yuriw-2024-02-22_21:39:39-rados-pacific-release-distro-default-smithi/>
rados/cephadm/mds_upgrade_sequence/ —> failed to shutdown mon (known
failure discussed with A.King)
rados/cephadm/mgr-nfs-upgrade —> failed to shutdown mon (known failure
discussed with A.King)
rados/cephadm/osds —> zap disk error (known failure)
rados/cephadm/smoke-roleless —> toomanyrequests: You have reached your
pull rate limit. https://www.docker.com/increase-rate-limit. (known
failures)
rados/cephadm/thrash —> just needs (CACHE_POOL_NEAR_FULL) whitelisted
(known failure)
rados/cephadm/upgrade —> CEPHADM_FAILED_DAEMON (WRN) node-exporter (known
failure discussed with A.King)
rados/cephadm/workunits —> known failure:
https://tracker.ceph.com/issues/63887
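For reference, the whitelisting itself is a suite yaml change. A hedged sketch of what the added entries might look like, assuming the log-ignorelist key used by pacific-era qa suite configs (exact key name and placement vary per suite):

```yaml
# Sketch only: ignore entries for the expected warnings listed above.
log-ignorelist:
  - \(CEPHADM_PAUSED\)
  - \(CEPHADM_STRAY_DAEMON\)
  - \(CEPHADM_FAILED_DAEMON\)
  - \(CACHE_POOL_NEAR_FULL\)
```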
On Mon, Feb 26, 2024 at 10:22 AM Kamoltat Sirivadhna <ksirivad@xxxxxxxxxx>
wrote:
> RADOS approved
>
> On Wed, Feb 21, 2024 at 11:27 AM Yuri Weinstein <yweinste@xxxxxxxxxx>
> wrote:
>
>> Still seeking approvals:
>>
>> rados - Radek, Junior, Travis, Adam King
>>
>> All other product areas have been approved and are ready for the release
>> step.
>>
>> Pls also review the Release Notes:
>> https://github.com/ceph/ceph/pull/55694
>>
>>
>> On Tue, Feb 20, 2024 at 7:58 AM Yuri Weinstein <yweinste@xxxxxxxxxx>
>> wrote:
>> >
>> > We have restarted QE validation after fixing issues and merging several
>> PRs.
>> > The new Build 3 (rebase of pacific) tests are summarized in the same
>> > note (see Build 3 runs) https://tracker.ceph.com/issues/64151#note-1
>> >
>> > Seeking approvals:
>> >
>> > rados - Radek, Junior, Travis, Ernesto, Adam King
>> > rgw - Casey
>> > fs - Venky
>> > rbd - Ilya
>> > krbd - Ilya
>> >
>> > upgrade/octopus-x (pacific) - Adam King, Casey PTL
>> >
>> > upgrade/pacific-p2p - Casey PTL
>> >
>> > ceph-volume - Guillaume, fixed by
>> > https://github.com/ceph/ceph/pull/55658 retesting
>> >
>> > On Thu, Feb 8, 2024 at 8:43 AM Casey Bodley <cbodley@xxxxxxxxxx> wrote:
>> > >
>> > > thanks, i've created https://tracker.ceph.com/issues/64360 to track
>> > > these backports to pacific/quincy/reef
>> > >
>> > > On Thu, Feb 8, 2024 at 7:50 AM Stefan Kooman <stefan@xxxxxx> wrote:
>> > > >
>> > > > Hi,
>> > > >
>> > > > Is this PR: https://github.com/ceph/ceph/pull/54918 included as
>> well?
>> > > >
>> > > > You definitely want to build the Ubuntu / debian packages with the
>> > > > proper CMAKE_CXX_FLAGS. The performance impact on RocksDB is _HUGE_.
>> > > >
>> > > > Thanks,
>> > > >
>> > > > Gr. Stefan
>> > > >
>> > > > P.s. Kudos to Mark Nelson for figuring it out / testing.
>> > > > _______________________________________________
>> > > > ceph-users mailing list -- ceph-users@xxxxxxx
>> > > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>> > > >
>> > >
>>
>
>
--
Kamoltat Sirivadhna (HE/HIM)
Software Engineer - Ceph Storage
ksirivad@xxxxxxxxxx | T: (857) 253-8927
--
Laura Flores
She/Her/Hers
Software Engineer, Ceph Storage
Chicago, IL
lflores@xxxxxxx | lflores@xxxxxxxxxx
M: +1 708 738 8804
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx