Re: crush map has straw_calc_version=0 and legacy tunables on luminous

[1] Here is a really cool set of slides from Ceph Day Berlin where Dan van der Ster uses the mgr balancer module with upmap to gradually change the tunables of a cluster without causing major client impact.  The downside for you is that upmap requires all luminous or newer clients, but if you upgrade your kernel clients to 4.13+, then you can enable upmap in the cluster and use the balancer module to upgrade your cluster tunables.  As noted here [2], those kernel versions still report as jewel clients, but only because they are missing some non-essential luminous client features; they are fully compatible with upmap and the other required features.
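For reference, the enablement steps look roughly like this (a sketch based on the slides [1] and the thread in [2]; double-check against your own cluster, and only use the override flag once you have confirmed every client reporting as jewel really is a 4.13+ kernel):

    # See which feature sets the currently connected clients report
    ceph features

    # Require luminous-capable clients; upmap-capable 4.13+ kernels still
    # report as jewel, which is why the override flag is needed here
    ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it

    # Turn on the mgr balancer module in upmap mode
    ceph mgr module enable balancer
    ceph balancer mode upmap
    ceph balancer on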

As a side note on the balancer in upmap mode: it balances your cluster by attempting to distribute the PGs of each pool evenly across all OSDs.  So if you have 3 different pools, the PG counts for each of those pools should be within 1 or 2 of each other on every OSD in your cluster... it's really cool.  The slides also discuss how to get your cluster to that point, in case you have modified your weights or reweights at all.
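A quick way to watch the result (standard commands):

    # Per-OSD utilization and PG counts (the PGS column should converge)
    ceph osd df

    # Balancer state and a score for the current distribution (lower is better)
    ceph balancer status
    ceph balancer eval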


[1] https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-mastering-ceph-operations-upmap-and-the-mgr-balancer
[2] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-November/031206.html

On Mon, Feb 4, 2019 at 6:31 PM Shain Miley <SMiley@xxxxxxx> wrote:

For future reference I found these 2 links which answer most of the questions:

http://docs.ceph.com/docs/master/rados/operations/crush-map/

https://www.openstack.org/assets/presentation-media/Advanced-Tuning-and-Operation-guide-for-Block-Storage-using-Ceph-Boston-2017-final.pdf

 

We have about 250TB (x3) in our cluster, so I am leaning toward not changing things at this point, because it sounds like there would be a significant amount of data movement involved for not much in return.

 

If anyone knows of a strong reason I should change the tunables profile away from what I have, then please let me know so I don’t end up running the cluster in a sub-optimal state for no reason.

 

Thanks,

Shain

 

--

Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Shain Miley <SMiley@xxxxxxx>
Date: Monday, February 4, 2019 at 3:03 PM
To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: crush map has straw_calc_version=0 and legacy tunables on luminous

 

Hello,

I just upgraded our cluster to 12.2.11 and I have a few questions around straw_calc_version and tunables.

Currently ceph status shows the following:

crush map has straw_calc_version=0

crush map has legacy tunables (require argonaut, min is firefly)

 

  1. Will setting tunables to optimal also change the straw_calc_version, or do I need to set that separately?  (See the command sketch after these questions.)


  2. Right now I have a set of rbd kernel clients connecting using kernel version 4.4.  The ‘ceph daemon mon.id sessions’ command shows that this client is still connecting using the hammer feature set (and a few others on jewel as well):

    "MonSession(client.113933130 10.35.100.121:0/3425045489 is open allow *, features 0x7fddff8ee8cbffb (jewel))",  “MonSession(client.112250505 10.35.100.99:0/4174610322 is open allow *, features 0x106b84a842a42 (hammer))",

    My question is what is the minimum kernel version I would need to upgrade the 4.4 kernel server to in order to get to jewel or luminous?

 

  3. Will setting the tunables to optimal on luminous prevent jewel and hammer clients from connecting?  I want to make sure I don’t do anything that will prevent my existing clients from connecting to the cluster.
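For reference, the commands in question look something like this (a sketch of the syntax as of luminous, not a recommendation to run them):

    # Show the current tunable values, including straw_calc_version
    ceph osd crush show-tunables

    # Set straw_calc_version on its own
    ceph osd crush set-tunable straw_calc_version 1

    # Or switch the whole tunables profile; compare show-tunables output
    # afterwards to see whether straw_calc_version changed as well
    ceph osd crush tunables optimal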



Thanks in advance,

Shain

 

--

Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
