Re: Balancing cluster with large disks - 10TB HDD


On Wed, 2018-12-26 at 13:14 +0100, jesper@xxxxxxxx wrote:
> Thanks for the insight and links.
> 
> > As I can see you are on Luminous. Since Luminous Balancer plugin is
> > available [1], you should use it instead reweight's in place, especially
> > in upmap mode [2]
> 
> I'll try it out again - the last time I tried it, it complained about older
> clients - it should be better now.
> 
Setting require_min_compat_client to luminous is required for you to take
advantage of upmap.
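
For reference, the rough sequence to get onto the upmap balancer looks
something like this (command names as documented for Luminous - please
double-check against your own cluster before running anything):

$ sudo ceph features                                    # see which release connected clients report
$ sudo ceph osd set-require-min-compat-client luminous  # will refuse if pre-luminous clients are still connected
$ sudo ceph mgr module enable balancer                  # often enabled already on luminous
$ sudo ceph balancer mode upmap
$ sudo ceph balancer eval                               # score of the current PG distribution, lower is better
$ sudo ceph balancer on
$ sudo ceph balancer status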

> > Also, maybe I can catch other CRUSH mistakes - could I see `ceph osd
> > crush show-tunables`, `ceph osd crush rule dump`, and `ceph osd pool ls
> > detail`?
> 
> Here:
> $ sudo ceph osd crush show-tunables
> {
>     "choose_local_tries": 0,
>     "choose_local_fallback_tries": 0,
>     "choose_total_tries": 50,
>     "chooseleaf_descend_once": 1,
>     "chooseleaf_vary_r": 1,
>     "chooseleaf_stable": 0,
>     "straw_calc_version": 1,
>     "allowed_bucket_algs": 54,
>     "profile": "hammer",
>     "optimal_tunables": 0,
>     "legacy_tunables": 0,
>     "minimum_required_version": "hammer",
>     "require_feature_tunables": 1,
>     "require_feature_tunables2": 1,
>     "has_v2_rules": 1,
>     "require_feature_tunables3": 1,
>     "has_v3_rules": 0,
>     "has_v4_buckets": 1,
>     "require_feature_tunables5": 0,
>     "has_v5_rules": 0
> }
> 
> $ sudo ceph osd crush rule dump
> [
>     {
>         "rule_id": 0,
>         "rule_name": "replicated_ruleset_hdd",
>         "ruleset": 0,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>                 "item": -1,
>                 "item_name": "default~hdd"
>             },
>             {
>                 "op": "chooseleaf_firstn",
>                 "num": 0,
>                 "type": "host"
>             },
>             {
>                 "op": "emit"
>             }
>         ]
>     },
>     {
>         "rule_id": 1,
>         "rule_name": "replicated_ruleset_hdd_fast",
>         "ruleset": 1,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>                 "item": -28,
>                 "item_name": "default~hdd_fast"
>             },
>             {
>                 "op": "chooseleaf_firstn",
>                 "num": 0,
>                 "type": "host"
>             },
>             {
>                 "op": "emit"
>             }
>         ]
>     },
>     {
>         "rule_id": 2,
>         "rule_name": "replicated_ruleset_ssd",
>         "ruleset": 2,
>         "type": 1,
>         "min_size": 1,
>         "max_size": 10,
>         "steps": [
>             {
>                 "op": "take",
>                 "item": -21,
>                 "item_name": "default~ssd"
>             },
>             {
>                 "op": "chooseleaf_firstn",
>                 "num": 0,
>                 "type": "host"
>             },
>             {
>                 "op": "emit"
>             }
>         ]
>     },
>     {
>         "rule_id": 3,
>         "rule_name": "cephfs_data_ec42",
>         "ruleset": 3,
>         "type": 3,
>         "min_size": 3,
>         "max_size": 6,
>         "steps": [
>             {
>                 "op": "set_chooseleaf_tries",
>                 "num": 5
>             },
>             {
>                 "op": "set_choose_tries",
>                 "num": 100
>             },
>             {
>                 "op": "take",
>                 "item": -1,
>                 "item_name": "default~hdd"
>             },
>             {
>                 "op": "chooseleaf_indep",
>                 "num": 0,
>                 "type": "host"
>             },
>             {
>                 "op": "emit"
>             }
>         ]
>     }
> ]
> 
> $ sudo ceph osd pool ls detail
> pool 6 'kube' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 128 pgp_num 128 last_change 41045 flags hashpspool
> stripe_width 0 application rbd
>         removed_snaps [1~3]
> pool 15 'default.rgw.buckets.data' replicated size 3 min_size 2 crush_rule
> 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 41045 flags
> hashpspool stripe_width 0 application rgw
> pool 17 'default.rgw.users.keys' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 16 pgp_num 16 last_change 41045 lfor 0/36590
> flags hashpspool stripe_width 0 application rgw
> pool 18 'default.rgw.buckets.non-ec' replicated size 3 min_size 2
> crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 last_change 41045
> lfor 0/36595 flags hashpspool stripe_width 0 application rgw
> pool 19 'default.rgw.users.uid' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 16 pgp_num 16 last_change 41045 lfor 0/36608
> flags hashpspool stripe_width 0 application rgw
> pool 20 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 128 pgp_num 128 last_change 41045 flags hashpspool
> stripe_width 0 application rbd
> pool 26 'default.rgw.data.root' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 41045 flags hashpspool
> stripe_width 0 application rgw
> pool 27 'default.rgw.log' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 41045 flags hashpspool
> stripe_width 0 application rgw
> pool 28 'default.rgw.control' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 41045 flags hashpspool
> stripe_width 0 application rgw
> pool 29 'default.rgw.gc' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 8 pgp_num 8 last_change 41045 flags hashpspool
> stripe_width 0 application rgw
> pool 30 '.rgw.root' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 8 pgp_num 8 last_change 41045 flags hashpspool
> stripe_width 0 application rgw
> pool 31 'default.rgw.buckets.index' replicated size 3 min_size 2
> crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 41045
> flags hashpspool stripe_width 0 application rgw
> pool 32 'cephfs_data' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 2048 pgp_num 2048 last_change 45215 lfor
> 0/45204 flags hashpspool stripe_width 0 application cephfs
> pool 33 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 2
> object_hash rjenkins pg_num 256 pgp_num 256 last_change 34756 flags
> hashpspool stripe_width 0 application cephfs
> pool 34 'default.rgw.usage' replicated size 3 min_size 2 crush_rule 0
> object_hash rjenkins pg_num 16 pgp_num 16 last_change 41045 lfor 0/36615
> flags hashpspool stripe_width 0 application rgw
> pool 44 'cephfs_data_ec42' erasure size 6 min_size 4 crush_rule 3
> object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 43464 lfor
> 0/43453 flags hashpspool,ec_overwrites stripe_width 16384 application
> cephfs
> 
> 
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


