Re: ceph-volume lvm activate --all broken in 14.2.3

Alfredo,

I have seen that you posted a fix. Will this become part of the standard package update, or do I need to custom-build it? I am running clusters with podman and docker.

On Thu, Sep 5, 2019, 6:56 AM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
While we try to fix this, the only workaround in the meantime is not to
redirect stderr. That is far from ideal if you require redirection, but
so far it is the only way to avoid the problem.
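
For example (a sketch; the log path is only illustrative): instead of
something like

  ceph-volume lvm activate --all 2>>/var/log/ceph-activate.log

run it with stderr left attached to the terminal, e.g.

  ceph-volume lvm activate --all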


On Wed, Sep 4, 2019 at 7:54 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>
> On Wed, Sep 4, 2019 at 6:35 PM Sasha Litvak
> <alexander.v.litvak@xxxxxxxxx> wrote:
> >
> > How do you fix it?  Or should we wait until 14.2.4?
>
> This is a high priority for me; I will provide a fix as soon as
> possible, and hopefully a workaround.
>
> >
> > On Wed, Sep 4, 2019, 3:38 PM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> >>
> >> On Wed, Sep 4, 2019 at 4:01 PM Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
> >> >
> >> > Hi,
> >> >
> >> > see https://tracker.ceph.com/issues/41660
> >> >
> >> > ceph-volume lvm activate --all fails on the second OSD when stderr is
> >> > not a terminal.
> >> > Reproducible on different servers, so there's nothing weird about a
> >> > particular disk.
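> >> >
> >> > A minimal sketch of the trigger (assuming any non-tty stderr, such
> >> > as a redirect to a file, is enough to hit it):
> >> >
> >> >   ceph-volume lvm activate --all 2>/tmp/activate.log  # fails on the 2nd OSD
> >> >   ceph-volume lvm activate --all                      # stderr is a tty: works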
> >> >
> >> > Any idea where/how this is happening?
> >>
> >> That looks very odd; I haven't seen it anywhere other than in a unit
> >> test we have that fails on some machines. I was just investigating
> >> that today.
> >>
> >> Is it possible that the locale is set to something that is not
> >> en_US.UTF-8? I was able to replicate some failures with LC_ALL=C.
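> >>
> >> For example (a sketch; exact locale names can differ by distro):
> >>
> >>   locale                                    # inspect the current settings
> >>   LC_ALL=C ceph-volume lvm activate --all   # this replicated some failures for me
> >>   LC_ALL=en_US.UTF-8 ceph-volume lvm activate --all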
> >>
> >> Another thing I would try is enabling debug so that tracebacks are
> >> immediately available in the output (or show/paste the traceback you get):
> >>
> >> CEPH_VOLUME_DEBUG=1 ceph-volume lvm activate --all
> >>
> >> I'll follow up in the tracker ticket.
> >> >
> >> > This makes 14.2.3 unusable for us, as we need to re-activate all OSDs
> >> > after reboots because we don't have a persistent system disk.
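> >> >
> >> > (For context, a sketch of the kind of boot-time re-activation
> >> > involved; the script and log paths are illustrative, and note that
> >> > stderr is not a terminal in this situation:)
> >> >
> >> >   #!/bin/sh
> >> >   # e.g. run from /etc/rc.local or a boot hook: bring every OSD back up
> >> >   ceph-volume lvm activate --all >>/var/log/ceph-activate.log 2>&1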
> >> >
> >> >
> >> > Paul
> >> >
> >> > --
> >> > Paul Emmerich
> >> >
> >> > Looking for help with your Ceph cluster? Contact us at https://croit.io
> >> >
> >> > croit GmbH
> >> > Freseniusstr. 31h
> >> > 81247 München
> >> > www.croit.io
> >> > Tel: +49 89 1896585 90
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx
