Re: Balancer=on with crush-compat mode

This is what I am seeing with the change from pg_num 8 to 16:

[@c01 ceph]# ceph osd df | egrep '^ID|^19|^20|^21|^30'
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
19   ssd 0.48000  1.00000  447GiB  161GiB  286GiB 35.91 0.84  35
20   ssd 0.48000  1.00000  447GiB  170GiB  277GiB 38.09 0.89  36
21   ssd 0.48000  1.00000  447GiB  215GiB  232GiB 48.08 1.12  36
30   ssd 0.48000  1.00000  447GiB  220GiB  227GiB 49.18 1.15  37

Coming from

[@c01 ~]# ceph osd df | egrep '^19|^20|^21|^30'
19   ssd 0.48000  1.00000  447GiB  157GiB  290GiB 35.18 0.87  30
20   ssd 0.48000  1.00000  447GiB  125GiB  322GiB 28.00 0.69  30
21   ssd 0.48000  1.00000  447GiB  245GiB  202GiB 54.71 1.35  30
30   ssd 0.48000  1.00000  447GiB  217GiB  230GiB 48.46 1.20  30

I guess I should know more about the technology behind this to
appreciate the result. (I assume the PGs stay distributed this way until
more data is added, since the balancer now reports "Error EDOM: Unable
to find further optimization, ...")

 >>
 >>  >If I understand the balancer correctly, it balances PGs not data.
 >>  >This worked perfectly fine in your case.
 >>  >
 >>  >I prefer a PG count of ~100 per OSD, you are at 30. Maybe it would
 >>  >help to bump the PGs.
 >>  >
 >
 >> I am not sure if I should increase from 8 to 16. Because that would
 >> just halve the data in the PGs and they would probably end up on the
 >> same OSDs in the same ratio as now? Or am I assuming this incorrectly?
 >> Is 16 advised?
 >>
 >
 >If you had only one PG (the most extreme use case) it would always be
 >optimally misplaced. If you have lots, there are many more chances of
 >Ceph spreading them correctly. There is some hashing and pseudorandomness
 >in there to screw it up at times, but considering what you can do with
 >few -vs- many PGs, having many allows for a better spread than few, up
 >to some point where the CPU handling all the PGs eats more resources
 >than it's worth.
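
For anyone wanting to reproduce the 8 -> 16 change, this is the usual
way to resize a pool; <poolname> is a placeholder here, and on Luminous
pgp_num has to be raised as well before any data actually moves:

# check the current values first
[@c01 ~]# ceph osd pool get <poolname> pg_num
[@c01 ~]# ceph osd pool get <poolname> pgp_num
# raise pg_num, then pgp_num so the new PGs get remapped onto the OSDs
[@c01 ~]# ceph osd pool set <poolname> pg_num 16
[@c01 ~]# ceph osd pool set <poolname> pgp_num 16

The ~100 PGs per OSD mentioned above comes from the usual rule of thumb
of pg_num ~= (number of OSDs * 100) / replica count, rounded to a power
of two.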
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


