Re: "optimal" tunables on release upgrade

Hi Matthew,

My colleagues and I can confirm from experience that the tunables do not change automatically when you upgrade; a few years ago we ran into performance problems after an upgrade because the old tunables were still in place.

Of course, that behaviour may change in a future release.

In the meantime, ceph status will show a health warning (OLD_CRUSH_TUNABLES) if the tunables are older than recommended.

https://docs.ceph.com/en/latest/rados/operations/health-checks/#old-crush-tunables
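For what it's worth, something along these lines is a quick way to check and, if necessary, update them from the ceph CLI; just a sketch, the exact output fields vary between releases:

# show the tunables currently in effect (including the active profile)
ceph osd crush show-tunables

# check whether the cluster is currently warning about old tunables
ceph health detail | grep -i TUNABLES

# switch to the current optimal profile -- expect a lot of data movement afterwards
ceph osd crush tunables optimal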


Regards, Joachim


___________________________________
Clyso GmbH - Ceph Foundation Member
support@xxxxxxxxx
https://www.clyso.com

On 26.02.2021 at 15:52, Matthew Vernon wrote:
Hi,

Having been slightly caught out by tunables on my Octopus upgrade[0], can I just check that if I do
ceph osd crush tunables optimal

That will update the tunables on the cluster to the current "optimal" values (and move a lot of data around), but that this doesn't mean they'll change next time I upgrade the cluster or anything like that?

It's not quite clear from the documentation whether, the next time the "optimal" tunables change, that change will be applied to a cluster where I've set tunables this way, or whether tunables are only ever changed by a fresh invocation of ceph osd crush tunables...

[I assume the same answer applies to "default"?]

Regards,

Matthew

[0] I foolishly thought a cluster initially installed as Jewel would have jewel tunables


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


