Re: Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

We upgraded from Firefly to 12.2.1. We cannot use our RadosGW S3 endpoints anymore since multipart uploads are not being replicated. So we are also waiting for 12.2.2 before we can finally use our S3 endpoints again...

On Thu, Nov 16, 2017 at 3:33 PM, Ashley Merrick <ashley@xxxxxxxxxxxxxx> wrote:
Currently experiencing a nasty bug http://tracker.ceph.com/issues/21142

I would say wait a while for the next point release.

Ashley

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@lists.ceph.com] On Behalf Of Jack
Sent: 16 November 2017 22:22
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Is the 12.2.1 really stable? Anybody have production cluster with Luminous Bluestore?

My cluster (55 OSDs) has been running 12.2.x since the release, with BlueStore too. All good so far.

On 16/11/2017 15:14, Konstantin Shalygin wrote:
> Hi cephers.
> Some thoughts...
> At this time my cluster is on Kraken 11.2.0 - it works smoothly, with FileStore
> and RBD only.
> I want to upgrade to Luminous 12.2.1 and move to Bluestore, because this
> cluster will double in size with new disks, so it is the best opportunity
> to migrate to Bluestore.
>
> On the ML I found two problems:
> 1. Increased memory usage, should be fixed in upstream
> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021676.html).
>
> 2. OSD drops and goes cluster offline
> (http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022494.html).
> I don't know whether those were Bluestore or FileStore OSDs.
>
> The first case I can safely survive - the hosts have enough memory to go
> to Bluestore, and for the growth I can wait until the next stable release.
> The second case really scares me. As I understood it, the clusters with this
> problem are not yet in production.
>
> By this point I have completed all the preparations for the update, and
> now I need to figure out whether I should update to 12.2.1 or wait for
> the next stable release, because my cluster is in production and I
> can't afford a failure. Alternatively, I could upgrade but stay on FileStore
> until the next release; that is acceptable for me.
>
> Thanks.
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
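For the question of whether the dropped OSDs in that report were BlueStore or FileStore: the backend of each OSD can be checked from the cluster metadata. A minimal sketch, assuming a working `ceph` CLI with admin credentials (the OSD id `0` is just an example):

```shell
# Show the object store backend ("bluestore" or "filestore") for one OSD.
ceph osd metadata 0 | grep osd_objectstore

# Without an id, `ceph osd metadata` dumps JSON for all OSDs; filter it
# down to the id and backend fields to survey the whole cluster.
ceph osd metadata | grep -E '"id"|"osd_objectstore"'
```

This only reads metadata, so it is safe to run on a production cluster.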




--

Enrico Kern
Lead System Engineer

T +49 (0) 30 555713017  | +49 (0)152 26814501
E  enrico.kern@xxxxxxxxxx |  Skype flyersa



Glispa GmbH - Berlin Office
Sonnenburger Str. 73 10437 Berlin, Germany 
Managing Director: Dina Karol-Gavish, Registered in Berlin, AG Charlottenburg HRB 114678B
            
