Hi Dan, list,
Our cluster is small: three nodes with 24 4 TB spinning-disk OSDs in total,
plus SSD journals. We use RBD for VMs, and that's it. It runs nicely though. :-)
What attracts us about the jewel "optimal" tunables is the promise of
"significantly fewer mappings change when an OSD is marked out of the
cluster".
Our reasoning: switching to "optimal" NOW should give us faster rebuild
times later, when disaster strikes and we're all stressed out. :-)
After the jewel upgrade, we also raised the tunables from "(require
bobtail, min is firefly)" to "hammer". That caused roughly 24 hours of
rebalancing, but without any significant impact on the hosted VMs.
Is it safe to assume that setting it to "optimal" would have a similar
impact, or are the implications bigger?
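In case it helps the discussion: we were thinking of estimating the damage
offline before committing, along these lines. This is only a rough sketch,
done against copies of the maps so the live cluster is untouched; the
crushtool flag for the jewel chooseleaf_stable tunable is an assumption on
our part, so please check your crushtool's help for the exact name:

  # grab the current osdmap and extract its crushmap (read-only)
  ceph osd getmap -o osdmap.current
  osdmaptool osdmap.current --export-crush crushmap.current

  # flip the jewel tunable offline (chooseleaf_stable is what the jewel
  # "optimal" profile adds); flag name assumed, verify with crushtool --help
  crushtool -i crushmap.current --set-chooseleaf-stable 1 -o crushmap.optimal

  # build a copy of the osdmap with the new crushmap and dump PG mappings
  cp osdmap.current osdmap.optimal
  osdmaptool osdmap.optimal --import-crush crushmap.optimal
  osdmaptool osdmap.current --test-map-pgs-dump > pgs.before
  osdmaptool osdmap.optimal --test-map-pgs-dump > pgs.after

  # rough count of PGs whose mapping would change
  diff pgs.before pgs.after | grep -c '^>'

Would that give a reasonable estimate of the remapping, or is there a
better way to predict it?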
MJ
On 09/28/2017 10:29 AM, Dan van der Ster wrote:
Hi,
How big is your cluster and what is your use case?
For us, we'll likely never enable the recent tunables that need to
remap *all* PGs -- it would simply be too disruptive for marginal
benefit.
Cheers, Dan
On Thu, Sep 28, 2017 at 9:21 AM, mj <lists@xxxxxxxxxxxxx> wrote:
Hi,
We have completed the upgrade to jewel, and we set tunables to hammer.
The cluster is HEALTH_OK again. :-)
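For reference, that was just the profile switch, roughly (from memory):

  # apply the hammer tunables profile and verify what is now in effect
  ceph osd crush tunables hammer
  ceph osd crush show-tunables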
But now we would like to move on towards luminous and bluestore OSDs, and
we'd like to ask for some feedback first.
From the jewel Ceph docs on tunables: "Changing tunable to "optimal" on an
existing cluster will result in a very large amount of data movement as
almost every PG mapping is likely to change."
Given the above, and the fact that we would like to move to
luminous/bluestore in the not-too-distant future: which is the smarter approach?
1 - keep the cluster on the hammer tunables for now, upgrade to luminous in a
little while, convert the OSDs to bluestore, and only then set tunables to optimal
or
2 - set the tunables to optimal now, take the hit of "almost every PG
remapping" (throttling recovery as sketched below), and once that has
finished, upgrade to luminous, convert to bluestore, etc.
Which route is the preferred one?
Or is there a third (or fourth?) option..? :-)
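If we do go for option 2, the plan would be to throttle recovery during the
big remap so client I/O stays usable, roughly like this (the values are just
examples, and exact option names/defaults vary a bit between releases):

  # limit backfill/recovery concurrency on all OSDs while the remap runs
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
  # optionally de-prioritise recovery ops relative to client I/O
  ceph tell osd.* injectargs '--osd-recovery-op-priority 1'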
MJ
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com