Ceph balancer history and clarity

- If my cluster is not well balanced, do I have to run `ceph balancer 
execute` several times, because the balancer only optimizes in small steps?

- Is there some history of applied plans, so I can see how optimizing 
brings down the reported final score of 0.054781?

- How can I get the current score?

- I have a mix of 8 TB, 4 TB (the majority) and 3 TB drives; should I 
keep crush-compat or move to upmap?

- What would be good MIN/MAX and STDDEV values in the output of `ceph osd df`?
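For context on the last two questions: the current cluster score can be read with `ceph balancer eval`, and `ceph osd df` prints MIN/MAX and STDDEV of OSD utilization in its summary line. The sketch below only illustrates how such spread/STDDEV figures are derived from per-OSD utilization percentages; the utilization values are made up, and the choice of population stddev is an assumption about how the summary is computed.

```python
import statistics

# Hypothetical per-OSD utilization percentages, as they would appear in
# the %USE column of `ceph osd df` (values invented for illustration).
osd_use = [61.2, 63.5, 58.9, 64.1, 60.3, 62.7]

lo, hi = min(osd_use), max(osd_use)
spread = hi - lo  # gap between most- and least-full OSD

# Population standard deviation of utilization; a smaller value means a
# better-balanced cluster (assumption: the summary uses population stddev).
stddev = statistics.pstdev(osd_use)

print(f"MIN/MAX: {lo:.2f}/{hi:.2f}  spread: {spread:.2f}  STDDEV: {stddev:.2f}")
```

A lower STDDEV and a narrower MIN/MAX gap both indicate data is spread more evenly across OSDs, which is what successive balancer plans should drive down.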


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
