Re: Balancer=on with crush-compat mode

Den sön 6 jan. 2019 kl 13:22 skrev Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx>:
>
>  >If I understand the balancer correctly, it balances PGs, not data.
>  >This worked perfectly fine in your case.
>  >
>  >I prefer a PG count of ~100 per OSD; you are at 30. Maybe it would
>  >help to bump the PGs.
>  >

> I am not sure if I should increase from 8 to 16. Because that would just
>
> halve the data in the PGs, and they would probably end up on the same OSDs
> in the same ratio as now? Or am I assuming this incorrectly?
> Is 16 advised?
>

If you had only one PG (the most extreme case), it would always be optimally
misplaced. If you have lots, Ceph gets many more chances to spread them
correctly. There is some hashing and pseudorandomness in there that can skew
things at times, but compared with few PGs, having many allows for a better
spread, up to the point where the CPU cost of handling all those PGs eats more
resources than it's worth.
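The effect is easy to see with a toy model. The sketch below (my own illustration, not CRUSH itself: real Ceph placement uses CRUSH with weights, failure domains, and straw buckets) hashes PGs onto a set of OSDs and measures the relative imbalance of PGs per OSD. With equal-sized PGs, that imbalance approximates data imbalance, and it shrinks as the PG count grows:

```python
import hashlib
import statistics

def place(key: str, buckets: int) -> int:
    # Deterministic hash placement -- a simplistic stand-in for CRUSH.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % buckets

def relative_imbalance(num_pgs: int, num_osds: int = 10) -> float:
    # Map each PG to one OSD and count PGs per OSD; return the
    # population stddev of that count relative to the mean.
    counts = [0] * num_osds
    for pg in range(num_pgs):
        counts[place(f"pg-{pg}", num_osds)] += 1
    mean = num_pgs / num_osds
    return statistics.pstdev(counts) / mean

for pgs in (8, 64, 512, 4096):
    print(f"{pgs:5d} PGs -> relative imbalance {relative_imbalance(pgs):.3f}")
```

With 8 PGs on 10 OSDs some OSDs inevitably hold nothing while others hold two; with thousands of PGs the law of large numbers evens things out. In practice you would raise the count with `ceph osd pool set <pool> pg_num <n>` (and `pgp_num` on pre-Nautilus releases), keeping the ~100-PGs-per-OSD guideline in mind.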

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com