Hi Craig,

An uneven primaries distribution was indeed my first thought. I should have been more explicit about the percentages behind the histograms I gave, so let's look at them in more detail.

Out of the 27938 bench objects seen by the osdmap, the hosts are distributed like this:

20904 host1
21210 host2
20835 host3
20709 host4

That is the number of times each host appears (as primary, secondary or tertiary). The distribution is pretty flat: there is less than 0.5% of the total entries between the most and the least used host.

If we now consider only the primary host distribution, here is what we have:

7207 host1
6960 host2
6814 host3
6957 host4

That is the number of times each host appears as primary. Once again the distribution is correct, with less than 1.5% of the total entries between the host used most and least often as primary. I should add that a similar distribution is of course observed for the secondary and tertiary copies.

I think we have enough samples to confirm that the CRUSH function distributes correctly. Since each host has a 25% chance of being primary, this should not be the reason we observe a higher CPU load. There must be something else...

I should also add that we run 0.87.1 (Giant). Going back to a Firefly release is an option, as the phenomenon is not currently observed on comparable hardware platforms running 0.80.x.

About the memory on the hosts: 32GB is just a starting point for the tests. We'll add more later.

Frederic

Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote on 20/03/15 23:19:
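
PS: for reference, something along the lines of the following Python sketch can rebuild the two histograms above from a PG dump. It is only an illustration, not the exact script we used: the "pg_stats"/"acting" field names assumed for the JSON output of "ceph pg dump --format json" and the OSD-to-host table are assumptions to adapt to the actual osdmap.

import json
import subprocess
from collections import Counter

# Hypothetical OSD-to-host table; in practice it would be built from "ceph osd tree".
OSD_TO_HOST = {0: "host1", 1: "host1", 2: "host2", 3: "host2",
               4: "host3", 5: "host3", 6: "host4", 7: "host4"}

# Assumed layout of the dump: a top-level "pg_stats" list whose entries carry
# an "acting" list of OSD ids, primary first.
dump = json.loads(subprocess.check_output(["ceph", "pg", "dump", "--format", "json"]))

total = Counter()    # appearances as primary, secondary or tertiary
primary = Counter()  # appearances as primary only

for pg in dump["pg_stats"]:
    acting = pg["acting"]
    for osd in acting:
        total[OSD_TO_HOST[osd]] += 1
    primary[OSD_TO_HOST[acting[0]]] += 1

def spread(counter):
    # difference between the most and the least used host, as a % of all entries
    return 100.0 * (max(counter.values()) - min(counter.values())) / sum(counter.values())

print("all copies:", dict(total),   "spread: %.2f%%" % spread(total))
print("primaries :", dict(primary), "spread: %.2f%%" % spread(primary))

With four hosts weighted equally, both spreads should stay close to 0%, which is what the numbers above show.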