Hi all,
I'm looking for the best way to merge/remap existing host buckets into one.
I'm running a Ceph Nautilus cluster used as a Cinder backend, with two
pools, "volumes-service" and "volumes-recherche", each with dedicated OSDs:
host cccephnd00x-service {
    id -2        # do not change unnecessarily
    alg straw2
    hash 0       # rjenkins1
    item osd.0 weight 7.275
    item osd.6 weight 7.275
}
host cccephnd00x-recherche {
    id -3        # do not change unnecessarily
    alg straw2
    hash 0       # rjenkins1
    item osd.11 weight 7.266
    item osd.17 weight 7.266
    item osd.22 weight 7.266
    item osd.27 weight 7.266
}
...
root service {
    id -26       # do not change unnecessarily
    alg straw2
    hash 0       # rjenkins1
    item cccephnd001-service weight 14.550
    ...
    item cccephnd006-service weight 14.550
}
root recherche {
    id -27       # do not change unnecessarily
    alg straw2
    hash 0       # rjenkins1
    item cccephnd001-recherche weight 29.064
    ...
    item cccephnd006-recherche weight 29.064
}
rule HAService {
    id 1
    type replicated
    min_size 1
    max_size 10
    step take service
    step chooseleaf firstn 0 type host
    step emit
}
rule Recherche {
    id 2
    type replicated
    min_size 1
    max_size 10
    step take recherche
    step chooseleaf firstn 0 type host
    step emit
}

$ ceph osd pool get volumes-service crush_rule
crush_rule: HAService
$ ceph osd pool get volumes-recherche crush_rule
crush_rule: Recherche
The pool "volume-service" used to work with SSD cache tiering but we
decided to stop using it.
So I would like to keep these 2 pools but merge the buckets /host
cccephnd00X-service/ and /host cccephnd00X-recherche/ into one
/cccephnd00X-cinder/ for better performance (more OSDs assigned to pools).
In theory, migrate to this kind of crushmap:
host cccephnd00x-cinder {
    id -22
    alg straw2
    hash 0       # rjenkins1
    item osd.0 weight 7.275
    item osd.6 weight 7.275
    item osd.11 weight 7.266
    item osd.17 weight 7.266
    item osd.22 weight 7.266
    item osd.27 weight 7.266
}
...
root cinder {
    id -23
    alg straw2
    hash 0       # rjenkins1
    item cccephnd001-cinder weight 43.614
    ...
    item cccephnd006-cinder weight 43.614
}
rule Cinder {
    id 24
    type replicated
    min_size 1
    max_size 10
    step take cinder
    step chooseleaf firstn 0 type host
    step emit
}

$ ceph osd pool get volumes-service crush_rule
crush_rule: Cinder
$ ceph osd pool get volumes-recherche crush_rule
crush_rule: Cinder
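To prepare and sanity-check the change, I suppose I could edit the crush map
offline and test it with crushtool before injecting it. A rough sketch (rule
id 24 as in the map above, pool size 3 assumed):

# Export and decompile the current crush map
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt to add the cinder host buckets, root and rule,
# then recompile it
$ crushtool -c crushmap.txt -o crushmap-new.bin

# Check the mappings the new rule would produce before injecting it
$ crushtool -i crushmap-new.bin --test --rule 24 --num-rep 3 --show-mappings

# Inject the new crush map
$ ceph osd setcrushmap -i crushmap-new.bin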
In practice, I'm thinking about creating the new host buckets, root and rule,
then changing the crush_rule used by the two pools to the new one, and
finally deleting the old ones, roughly as in the sketch below.
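For one node it would look something like this (a rough sketch based on the
OSD ids and weights above, to be repeated for each cccephnd00X host with the
weights adapted; the old buckets can only be removed once they are empty):

# New root and host bucket
$ ceph osd crush add-bucket cinder root
$ ceph osd crush add-bucket cccephnd001-cinder host
$ ceph osd crush move cccephnd001-cinder root=cinder

# Move the OSDs of both old host buckets into the new one,
# keeping their current weights
$ ceph osd crush set osd.0 7.275 root=cinder host=cccephnd001-cinder
$ ceph osd crush set osd.6 7.275 root=cinder host=cccephnd001-cinder
$ ceph osd crush set osd.11 7.266 root=cinder host=cccephnd001-cinder
...

# New rule, then point both pools at it
$ ceph osd crush rule create-replicated Cinder cinder host
$ ceph osd pool set volumes-service crush_rule Cinder
$ ceph osd pool set volumes-recherche crush_rule Cinder

# Once everything is healthy again, remove the old rules and the
# now-empty buckets
$ ceph osd crush rule rm HAService
$ ceph osd crush rule rm Recherche
$ ceph osd crush remove cccephnd001-service
$ ceph osd crush remove cccephnd001-recherche
$ ceph osd crush remove service
$ ceph osd crush remove recherche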
Do you think that can be done easily (and without losing existing data)?
I guess there will be a huge amount of rebalancing activity, but I don't
have much choice.
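To keep that rebalancing under control, I suppose I could pause data movement
while making the crush changes and throttle backfill afterwards, something
along these lines (conservative values, just as an example):

# Pause data movement while the crush map and pool changes are made
$ ceph osd set norebalance
$ ceph osd set nobackfill

# Keep backfill/recovery gentle once it starts
$ ceph config set osd osd_max_backfills 1
$ ceph config set osd osd_recovery_max_active 1

# ... apply the crush/pool changes, then let the data move
$ ceph osd unset nobackfill
$ ceph osd unset norebalance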
Or do you have any other suggestions?
Cheers,
Adrien
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx