Re: rebalancing ceph cluster


 



On 24/06/2019 11:25, jinguk.kwon@xxxxxxxxxxx wrote:
Hello everyone,

We have a number of OSDs in our Ceph cluster.
One OSD's usage is above 77% while another OSD's usage is 39%, on the same host.

I wonder why the OSDs' usage differs so much, and how I can fix it.

ID  CLASS   WEIGHT    REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS TYPE NAME
 -2          93.26010        - 93.3TiB 52.3TiB 41.0TiB 56.04 0.98   -     host serverA
…...
 33 HDD  9.09511  1.00000 9.10TiB 3.55TiB 5.54TiB 39.08 0.68  66         osd.4
 45 HDD   7.27675  1.00000 7.28TiB 5.64TiB 1.64TiB 77.53 1.36  81         osd.7
…... 

-5          79.99017        - 80.0TiB 47.7TiB 32.3TiB 59.62 1.04   -     host serverB
  1 HDD   9.09511  1.00000 9.10TiB 4.79TiB 4.31TiB 52.63 0.92  87         osd.1
  6 HDD   9.09511  1.00000 9.10TiB 6.62TiB 2.48TiB 72.75 1.27  99         osd.6
 …...

Thank you

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


In some cases the root cause is a low PG count: the fewer PGs you have, the higher the variation in PG count per OSD, and this is compounded further when your total OSD count is also small.
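That PG-count effect can be illustrated with a small simulation. This is only a sketch: CRUSH placement is deterministic pseudo-random hashing, not uniform random sampling as modelled here, but the statistical intuition is the same — the relative spread of PGs per OSD shrinks roughly as 1/sqrt(mean PGs per OSD).

```python
import random
import statistics

def pg_spread(num_pgs, num_osds, trials=200, seed=42):
    """Place num_pgs PGs uniformly at random across num_osds OSDs and
    return the mean relative standard deviation of PGs per OSD
    (a rough proxy for per-OSD utilization variance)."""
    rng = random.Random(seed)
    rsds = []
    for _ in range(trials):
        counts = [0] * num_osds
        for _ in range(num_pgs):
            counts[rng.randrange(num_osds)] += 1
        mean = num_pgs / num_osds
        rsds.append(statistics.pstdev(counts) / mean)
    return statistics.mean(rsds)

# Few PGs per OSD -> large spread; many PGs per OSD -> small spread.
low_pg_rsd = pg_spread(num_pgs=128, num_osds=16)    # ~8 PGs per OSD
high_pg_rsd = pg_spread(num_pgs=2048, num_osds=16)  # ~128 PGs per OSD
print(f"relative spread with   ~8 PGs/OSD: {low_pg_rsd:.2f}")
print(f"relative spread with ~128 PGs/OSD: {high_pg_rsd:.2f}")
```

In practice the usual remedies are to raise pg_num on under-split pools, enable the balancer (`ceph balancer on` with `ceph balancer mode upmap` on Luminous or later), or run `ceph osd reweight-by-utilization` to push data off the fullest OSDs.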

