Re: Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

On Wed, Nov 22, 2017 at 8:29 AM, magicboiz@xxxxxxxxx
<magicboiz@xxxxxxxxx> wrote:
> Hi
>
> We have a Ceph Jewel cluster running, but in our Lab environment, when we
> try to upgrade to 12.2.0, we are facing a problem with cephx/auth and MGR.
>
> See these bugs:
>
> - http://tracker.ceph.com/issues/22096
> -
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-August/020396.html

This issue has come up multiple times on the ceph-users list; see the tracker:
http://tracker.ceph.com/issues/20950
It is fixed and verified in 12.2.2, but not in 12.2.1. 12.2.2 has not been
released yet and is still in the backports stage.
A workaround is also discussed here for now:
https://www.spinics.net/lists/ceph-devel/msg37911.html
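
For reference, the workaround that circulated on the list amounts to re-granting
the affected client keys the new 'mgr' capability that Luminous introduces. A
minimal sketch is below; the exact caps string is an assumption here, so verify
it against the linked thread before running it on a real cluster:

```shell
# Hedged sketch of the cephx/mgr workaround (assumed from the linked
# thread): after upgrading to Luminous, client keys created under Jewel
# may lack the new 'mgr' capability, so ceph-mgr cannot authenticate.

# Inspect the current caps on the admin key:
ceph auth get client.admin

# Re-grant the admin key its existing caps plus 'mgr allow *'.
# NOTE: 'ceph auth caps' REPLACES all caps, so the full set must be
# restated, not just the mgr part.
ceph auth caps client.admin \
    mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
```

Any other client keys that need to talk to the mgr would need the same
treatment with their own cap sets.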

>
>
> Thanks.
> J.
>
>
>
> On 16/11/17 15:14, Konstantin Shalygin wrote:
>>
>> Hi cephers.
>> Some thoughts...
>> At this time my cluster is on Kraken 11.2.0 and works smoothly with FileStore and
>> RBD only.
>> I want to upgrade to Luminous 12.2.1 and move to BlueStore, because this cluster
>> is about to double in size with new disks, so it is the best opportunity to migrate to
>> BlueStore.
>>
>> On the ML I found two problems:
>> 1. Increased memory usage, which should be fixed upstream
>> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html).
>> 2. OSDs dropping out and the cluster going offline
>> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022494.html).
>> I don't know whether those were BlueStore or FileStore OSDs.
>>
>> The first case I can safely survive: the hosts have enough memory to go to
>> BlueStore, and as the cluster grows I can wait until the next stable release.
>> The second case really scares me. As I understood it, the clusters with that
>> problem are not in production for now.
>>
>> By this point I have completed all the preparations for the upgrade, and now
>> I need to decide whether I should upgrade to 12.2.1 or wait for the next
>> stable release, because my cluster is in production and I cannot afford a failure.
>> Or I could upgrade and keep using FileStore until the next release; that is acceptable to me.
>>
>> Thanks.
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>


