Re: Balancer=on with crush-compat mode

On Sat, 5 Jan 2019, 13:38 Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:

I have straw2, balancer=on, and crush-compat, and it gives a poor spread
over my SSD drives (only 4 of them), which are used by only 2 pools. One
of these pools has pg_num 8. Should I increase this to 16 to get a
better result, or will it never improve?
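As a quick sanity check on why pg_num 8 spreads so coarsely over 4 OSDs,
here is a back-of-the-envelope sketch (assuming size 3, as the pool
listing further down shows):

# pg_num 8 * size 3 = 24 PG copies over 4 SSD OSDs -> ideally 6 per OSD
# so every misplaced copy shifts about 1/6 of an OSD's expected share
echo "ideal PG copies per OSD: $(( 8 * 3 / 4 ))"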

For now I'd like to stick to crush-compat, so I can use a default
CentOS 7 kernel.

PG upmap is supported in the CentOS 7.5+ kernels.
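If every client on the cluster can be flagged Luminous-capable (which
upmap mode requires), the switch would look roughly like this; a sketch,
not taken from this cluster:

ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on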

Luminous 12.2.8, 3.10.0-862.14.4.el7.x86_64, CentOS Linux release 7.5.1804 (Core)



[@c01 ~]# cat balancer-1-before.txt | egrep '^19|^20|^21|^30'
19   ssd 0.48000  1.00000  447GiB  164GiB  283GiB 36.79 0.93  31
20   ssd 0.48000  1.00000  447GiB  136GiB  311GiB 30.49 0.77  32
21   ssd 0.48000  1.00000  447GiB  215GiB  232GiB 48.02 1.22  30
30   ssd 0.48000  1.00000  447GiB  151GiB  296GiB 33.72 0.86  27

[@c01 ~]# ceph osd df | egrep '^19|^20|^21|^30'
19   ssd 0.48000  1.00000  447GiB  157GiB  290GiB 35.18 0.87  30
20   ssd 0.48000  1.00000  447GiB  125GiB  322GiB 28.00 0.69  30
21   ssd 0.48000  1.00000  447GiB  245GiB  202GiB 54.71 1.35  30
30   ssd 0.48000  1.00000  447GiB  217GiB  230GiB 48.46 1.20  30
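To put a number on the spread instead of eyeballing %USE and VAR (the
ceph osd df columns here are ID, CLASS, WEIGHT, REWEIGHT, SIZE, USE,
AVAIL, %USE, VAR and PGS; the egrep strips the header line), the
balancer can score the current distribution itself; lower is better:

ceph balancer status
ceph balancer eval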

[@c01 ~]# ceph osd pool ls detail | egrep 'fs_meta|rbd.ssd'
pool 19 'fs_meta' replicated size 3 min_size 2 crush_rule 5 object_hash rjenkins pg_num 16 pgp_num 16 last_change 22425 lfor 0/9035 flags hashpspool stripe_width 0 application cephfs
pool 54 'rbd.ssd' replicated size 3 min_size 2 crush_rule 5 object_hash rjenkins pg_num 8 pgp_num 8 last_change 24666 flags hashpspool stripe_width 0 application rbd
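Per the listing above, rbd.ssd is the pg_num 8 pool. If you do try 16,
something like the following should work on Luminous (pgp_num has to be
raised separately, and the change triggers data movement):

ceph osd pool set rbd.ssd pg_num 16
ceph osd pool set rbd.ssd pgp_num 16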

[@c01 ~]# ceph df | egrep 'ssd|fs_meta'
    fs_meta                           19      170MiB      0.07        240GiB      2451382
    fs_data.ssd                       33          0B         0        240GiB            0
    rbd.ssd                           54      266GiB     52.57        240GiB        75902
    fs_data.ec21.ssd                  55          0B         0        480GiB            0

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
