Re: tunable question

Hi, 

We have similar issues.
After upgrading from hammer to jewel, the tunable "chooseleaf_stable"
was introduced. If we activate it, nearly all data will be moved. The
cluster has 2400 OSDs on 40 nodes across two datacenters and holds
2.5 PB of data.
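
For anyone who wants to estimate the impact beforehand: if I remember
the tooling correctly, the PG mappings can be compared offline roughly
like this (file names are just placeholders):

    # show the currently active CRUSH tunables
    ceph osd crush show-tunables

    # export and decompile the CRUSH map
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # edit crush.txt and set: tunable chooseleaf_stable 1
    crushtool -c crush.txt -o crush.new

    # dry-run both maps against the current OSD map and diff the results
    ceph osd getmap -o osd.map
    osdmaptool osd.map --test-map-pgs-dump > pgs.before
    osdmaptool osd.map --import-crush crush.new --test-map-pgs-dump > pgs.after
    diff pgs.before pgs.after | wc -l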

We tried to enable it, but the backfill traffic is too high to be
handled without impacting other services on the network.
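
For what it is worth, the backfill can at least be throttled while the
remapping runs; a minimal sketch of the knobs involved (the values are
only placeholders, not a recommendation):

    # reduce parallel backfills and recovery ops at runtime
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

    # optionally pause backfill during peak hours and release it later
    ceph osd set nobackfill
    ceph osd unset nobackfill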

Does someone know whether it is necessary to enable this tunable? And
could it be a problem in the future if we want to upgrade to newer
versions without it enabled?

Regards,
Manuel Lausch

On Thu, 28 Sep 2017 10:29:58 +0200,
Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

> Hi,
> 
> How big is your cluster and what is your use case?
> 
> For us, we'll likely never enable the recent tunables that need to
> remap *all* PGs -- it would simply be too disruptive for marginal
> benefit.
> 
> Cheers, Dan
> 
> 
> On Thu, Sep 28, 2017 at 9:21 AM, mj <lists@xxxxxxxxxxxxx> wrote:
> > Hi,
> >
> > We have completed the upgrade to jewel, and we set tunables to
> > hammer. Cluster again HEALTH_OK. :-)
> >
> > But now, we would like to proceed in the direction of luminous and
> > bluestore OSDs, and we would like to ask for some feedback first.
> >
> > From the jewel ceph docs on tunables: "Changing tunable to
> > "optimal" on an existing cluster will result in a very large amount
> > of data movement as almost every PG mapping is likely to change."
> >
> > Given the above, and the fact that we would like to move to
> > luminous/bluestore in the not-too-distant future, which is cleverer:
> >
> > 1 - keep the cluster at tunable hammer now, upgrade to luminous in
> > a little while, change OSDs to bluestore, and then set tunables to
> > optimal
> >
> > or
> >
> > 2 - set tunable to optimal now, take the impact of "almost all PG
> > remapping", and when that is finished, upgrade to luminous,
> > bluestore etc.
> >
> > Which route is the preferred one?
> >
> > Or is there a third (or fourth?) option..? :-)
> >
> > MJ
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Manuel Lausch

Systemadministrator
Cloud Services

1&1 Mail & Media Development & Technology GmbH | Brauerstraße 48 |
76135 Karlsruhe | Germany
Phone: +49 721 91374-1847
E-Mail: manuel.lausch@xxxxxxxx | Web: www.1und1.de

Amtsgericht Montabaur, HRB 5452

Geschäftsführer: Thomas Ludwig, Jan Oetjen


Member of United Internet

This e-mail may contain confidential and/or privileged information. If
you are not the intended recipient of this e-mail, you are hereby
notified that saving, distribution or use of the content of this e-mail
in any way is prohibited. If you have received this e-mail in error,
please notify the sender and delete the e-mail.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



