Re: 16.2.11 branch

On Fri, Oct 28, 2022 at 8:51 AM Laura Flores <lflores@xxxxxxxxxx> wrote:
>
> Hi Christian,
>
> > There is also https://tracker.ceph.com/versions/656, which seems to be
> > tracking the open issues tagged for this particular point release.
> >
>
> Yes, thank you for providing the link.
>
> > If you don't mind me asking, Laura, have those issues regarding the
> > testing lab been resolved yet?
> >
>
> There are currently a lot of folks working to fix the testing lab issues.
> Essentially, disk corruption affected our ability to reach quay.ceph.io.
> We've made progress this morning, but we are still working to understand
> the root cause of the corruption. We expect to re-deploy affected services
> soon so we can resume testing for v16.2.11.

We got a note about this today, so I wanted to clarify:

For Reasons, the Sepia lab we run teuthology in currently uses a Red
Hat Enterprise Virtualization stack (mostly KVM with a lot of fancy
orchestration packaged up), backed by Gluster. (Yes, really: a full
Ceph integration was never built, and at one point this was deemed
more straightforward than the available alternative, which was
running all-up OpenStack backed by Ceph.) The disk images stored in
Gluster started reporting corruption last week, even though Gluster
itself was claiming to be healthy, and with David's departure and his
backup on vacation, it took the remaining team members a while to
figure out what was going on and identify strategies to resolve or
work around it.
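
For the curious, part of what made this confusing is that the two
layers can disagree. As a rough sketch of the kind of checks involved
(with a made-up volume name and image path, not our actual layout):

    # Gluster's own health view; this is the layer that claimed to be fine
    gluster volume heal vmstore info
    gluster volume status vmstore detail

    # Checking a disk image directly; this is where the corruption surfaced
    qemu-img check /gluster/vmstore/images/guest01.qcow2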

The relevant people have figured out a lot more of what was going on,
and Adam (David's backup) is back now, so we're expecting things to
resolve more quickly at this point. And indeed, the team is looking at
other options for providing this infrastructure going forward. :)
-Greg

>
> You can follow updates on the two Tracker issues below:
>
>    1. https://tracker.ceph.com/issues/57914
>    2. https://tracker.ceph.com/issues/57935
>
>
> > There are quite a few bugfixes in the pending 16.2.11 release which we
> > are waiting for. TBH, I was about to ask whether it would not be
> > sensible to do an intermediate release rather than let it grow bigger
> > and bigger, with even more changes / fixes going out at once.
> >
>
> Fixes for v16.2.11 are pretty much paused at this point; the bottleneck
> lies in getting some outstanding patches tested before they are backported.
> Whether we stop now or continue to introduce more patches, the timeframe
> for getting things tested remains the same.
>
> I hope this clears up some of the questions.
>
> Thanks,
> Laura Flores
>
>
> On Fri, Oct 28, 2022 at 9:41 AM Christian Rohmann <
> christian.rohmann@xxxxxxxxx> wrote:
>
> > On 28/10/2022 00:25, Laura Flores wrote:
> > > Hi Oleksiy,
> > >
> > > The Pacific RC has not been declared yet since there have been
> > > problems in our upstream testing lab. There is no ETA yet for
> > > v16.2.11 for that reason, but the full diff of all the patches that
> > > were included will be published to ceph.io when v16.2.11 is
> > > released. There will also be a diff published in the documentation
> > > on this page:
> > > https://docs.ceph.com/en/latest/releases/pacific/
> > >
> > > In the meantime, here is a link to the diff in commits between
> > > v16.2.10 and the Pacific branch:
> > > https://github.com/ceph/ceph/compare/v16.2.10...pacific
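> > >
> > > (If you have a local checkout, something along these lines should
> > > show roughly the same set of commits:
> > >
> > >     git log --oneline v16.2.10..pacific
> > > )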
> >
> > There is also https://tracker.ceph.com/versions/656, which seems to be
> > tracking the open issues tagged for this particular point release.
> >
> >
> > If you don't mind me asking, Laura, have those issues regarding the
> > testing lab been resolved yet?
> >
> > There are quite a few bugfixes in the pending 16.2.11 release which we
> > are waiting for. TBH, I was about to ask whether it would not be
> > sensible to do an intermediate release rather than let it grow bigger
> > and bigger, with even more changes / fixes going out at once.
> >
> >
> >
> > Regards
> >
> >
> > Christian
> >
> >
>
> --
> Laura Flores
> She/Her/Hers
> Software Engineer, Ceph Storage
> Red Hat Inc. <https://www.redhat.com>
> Chicago, IL
> lflores@xxxxxxxxxx
> M: +17087388804

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



