Yeah, we'll make sure the container images are built before announcing it.
On 5/28/20 1:30 PM, David Orman wrote:
Due to the impact/severity of this issue, can we make sure the Docker
images are pushed simultaneously for those of us using
cephadm/containers? (With the last release, there was a significant
delay.) I'm glad the tempfix is being put in place in short order --
thank you for the quick turnaround and understanding.
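For anyone else on cephadm, a minimal sketch of one way to avoid racing
the image push, assuming the release lands at docker.io/ceph/ceph:v15.2.3
(tag assumed here for illustration): pin the exact image rather than a
floating tag, and only kick off the upgrade once it is pullable:

    # pin the image cephadm deploys (tag assumed for illustration)
    ceph config set global container_image docker.io/ceph/ceph:v15.2.3
    # or drive the upgrade against that exact image
    ceph orch upgrade start --image docker.io/ceph/ceph:v15.2.3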
On Thu, May 28, 2020 at 3:03 PM Josh Durgin <jdurgin@xxxxxxxxxx> wrote:
Hi Paul, we're planning to release 15.2.3 with the workaround [0]
tomorrow, so folks don't have to worry as we work on a more complete
fix.
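For clusters already on 15.2.2 that can't wait for the point release,
a possible interim step -- a sketch only, assuming the workaround is
the WAL pre-extension behavior discussed in
https://tracker.ceph.com/issues/45613 (verify there before applying):

    # stop pre-extending bluefs WAL files on OSDs (option name assumed
    # from the tracker discussion; confirm before use)
    ceph config set osd bluefs_preextend_wal_files false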
Josh
[0] https://github.com/ceph/ceph/pull/35293
On 5/27/20 6:27 AM, Paul Emmerich wrote:
> Hi,
>
> Since this bug may lead to data loss when several OSDs crash at the
> same time (e.g., after a power outage): can we pull the release from
> the mirrors and Docker Hub?
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Wed, May 20, 2020 at 7:18 PM Josh Durgin <jdurgin@xxxxxxxxxx> wrote:
>
> Hi folks, at this time we recommend pausing OSD upgrades to 15.2.2.
>
> There have been a couple of reports of OSDs crashing due to rocksdb
> corruption after upgrading to 15.2.2 [1] [2]. It's safe to upgrade
> monitors and mgr, but OSDs and everything else should wait.
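>
> A quick way to check what each daemon is actually running (a sketch
> using standard commands; the ceph orch ones apply to cephadm-managed
> clusters only):
>
>     ceph versions             # version counts per daemon type
>     ceph orch upgrade status  # current state of a cephadm upgrade
>     ceph orch upgrade pause   # halt an in-flight cephadm upgrade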
>
> We're investigating and will get a fix out as soon as we can. You
> can follow progress on this tracker:
>
> https://tracker.ceph.com/issues/45613
>
> Josh
>
> [1] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/CX5PRFGL6UBFMOJC6CLUMLPMT4B2CXVQ/
> [2] https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/CWN7BNPGSRBKZHUF2D7MDXCOAE3U2ERU/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx