On Thu, Aug 15, 2024 at 11:50 AM Brad Hubbard <bhubbard@xxxxxxxxxx> wrote:
>
> On Tue, Aug 6, 2024 at 6:33 AM Yuri Weinstein <yweinste@xxxxxxxxxx> wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/67340#note-1
> >
> > Release Notes - N/A
> > LRC upgrade - N/A
> > Gibba upgrade - TBD
> >
> > Seeking approvals/reviews for:
> >
> > rados - Radek, Laura (https://github.com/ceph/ceph/pull/59020 is being
> > tested and will be cherry-picked when ready)
> >
> > rgw - Eric, Adam E
> > fs - Venky
> > orch - Adam King
> > rbd, krbd - Ilya
> >
> > quincy-x, reef-x - Laura, Neha
> >
> > powercycle - Brad
>
> https://pulpito.ceph.com/yuriw-2024-08-02_15:42:13-powercycle-squid-release-distro-default-smithi/7833420/
> is a problem with the cfuse_workunit_kernel_untar_build task where
> it's failing to build the kernel, so a problem with the task itself I
> believe at this point.
>
> https://pulpito.ceph.com/yuriw-2024-08-02_15:42:13-powercycle-squid-release-distro-default-smithi/7833422/
> is a problem with the cfuse_workunit_suites_ffsb task where it's
> reporting
> 2024-08-03T06:51:35.402 INFO:tasks.workunit.client.0.smithi089.stdout:Probably out of disk space

I'm pretty sure the first of these, and possibly the second as well,
are ceph-fuse issues and I've created
https://tracker.ceph.com/issues/67565 and asked for input from the FS team.

> I'll chase these down, but I don't think they are powercycle issues
> per se at this stage. I will prioritise identifying the specific root
> cause however.
>
> APPROVED.
>
> > crimson-rados - Matan, Samuel
> >
> > ceph-volume - Guillaume
> >
> > Pls let me know if any tests were missed from this list.
>
> --
> Cheers,
> Brad

--
Cheers,
Brad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx