Re: Help rebalancing OSD usage, Luminous 12.2.2

2018-01-30 17:24 GMT+01:00 Bryan Banister <bbanister@xxxxxxxxxxxxxxx>:

Hi all,

 

We are still very new to running a Ceph cluster and have been running an RGW cluster for a while now (~6 months); it mainly holds large DB backups (write once, read once, delete after N days).  The system is now warning us about an OSD that is near_full, so we went to look at the usage across OSDs.  We are somewhat surprised at how imbalanced the usage is, with the lowest OSD at 22% full, the highest at nearly 90%, and an almost linear usage pattern across the OSDs (though it looks to step in roughly 5% increments):

 

[root@carf-ceph-osd01 ~]# ceph osd df | sort -nk8

ID  CLASS WEIGHT  REWEIGHT SIZE  USE   AVAIL %USE  VAR  PGS
77   hdd 7.27730  1.00000 7451G 1718G 5733G 23.06 0.43  32
73   hdd 7.27730  1.00000 7451G 1719G 5732G 23.08 0.43  31


I noticed that the PG count (the last column there, which counts PGs per OSD, I gather) was kind of even,
so perhaps the objects that end up in the PGs are very unbalanced in size?
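
One way to check is to compare how much data each PG holds on each OSD. A rough sketch, assuming the Luminous "ceph osd df" column layout quoted above (field 6 = USE, field 10 = PGS); adjust the field numbers if your output differs:

# average data per PG on each OSD, sorted ascending
ceph osd df | awk '$1 ~ /^[0-9]+$/ && $10 > 0 {printf "%s %.1fG/PG\n", $1, $6/$10}' | sort -nk2

If those per-PG sizes vary a lot while the PG counts stay even, uneven object sizes would explain the imbalance.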

But yes, using reweight to compensate for this should work for you.

ceph osd test-reweight-by-utilization

should be worth testing.
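
Something along these lines; the three numbers here are only illustrative, not recommendations, so check the reweight-by-utilization docs for the exact semantics on your release:

# dry run: print the proposed weight changes without applying them
ceph osd test-reweight-by-utilization 110 0.05 8

# apply the same plan: overload threshold in %, max weight change
# per OSD, and the max number of OSDs to adjust in one pass
ceph osd reweight-by-utilization 110 0.05 8

You can also nudge the single near_full OSD down by hand with "ceph osd reweight <osd-id> <weight between 0.0 and 1.0>" while you evaluate the utilization-based approach.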

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
