Re: Ceph Cluster to OSD Utilization not in Sync


 



Thank you, Dyweni, for the quick response. We have two Hammer clusters, which are due for an upgrade to Luminous next month, and one Luminous 12.2.8 cluster. I will try this on Luminous, and if it works, I will apply the same approach once the Hammer clusters are upgraded, rather than adjusting the weights.

Thanks,
Pardhiv Karri

On Fri, Dec 21, 2018 at 1:05 PM Dyweni - Ceph-Users <6EXbab4FYk8H@xxxxxxxxxx> wrote:

Hi,


If you are running Ceph Luminous or later, use the Ceph Manager Daemon's Balancer module.  (http://docs.ceph.com/docs/luminous/mgr/balancer/).
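For reference, enabling the balancer on a Luminous cluster might look like the following sketch (verify the mode and compat requirements against your release before running it):

```shell
# Enable the balancer module in ceph-mgr (Luminous 12.2.x or later)
ceph mgr module enable balancer

# crush-compat mode works with older clients; upmap mode gives
# finer-grained placement but requires all clients to be Luminous+
# (set with: ceph osd set-require-min-compat-client luminous)
ceph balancer mode crush-compat

# Turn on automatic balancing and check what the module is doing
ceph balancer on
ceph balancer status
```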


Otherwise, tweak the OSD weights (not the OSD CRUSH weights) until you achieve uniformity.  (You should be able to get under 1 standard deviation.)  I would adjust in small amounts so as not to overload your cluster.


Example:

ceph osd reweight osd.X  y.yyy
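Concretely, that might look like the following (osd.12 and 0.95 are illustrative values; 1.0 is the default override weight, and smaller values shift PGs away from that OSD):

```shell
# Show per-OSD utilization, the reweight column, and the STDDEV line
ceph osd df

# Nudge one overfull OSD down a little at a time
ceph osd reweight osd.12 0.95

# Watch recovery settle before touching the next OSD
ceph -w
```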




On 2018-12-21 14:56, Pardhiv Karri wrote:

Hi,
 
We have Ceph clusters that are larger than 1 PB. We are using the tree bucket algorithm. The issue is with data placement: when cluster utilization is at 65%, some of the OSDs are already above 87%. We had to raise the nearfull ratio to 0.90 to silence the warnings and get the cluster back to HEALTH_OK.
 
How can we keep OSD utilization in sync with cluster utilization (the two percentages close together)? We want to utilize the cluster heavily (above 80%) without unnecessarily adding nodes/OSDs. Right now close to 400 TB of disk space sits unused because some OSDs are above 87% while others are below 50%. If the OSDs above 87% reach 95%, the cluster will have issues. What is the best way to mitigate this?
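To quantify this kind of spread, one option is to parse the JSON output of `ceph osd df` (a sketch assuming the Luminous-era schema, where each entry under "nodes" carries "name" and "utilization" fields — check against your release):

```shell
# Print the five fullest and five emptiest OSDs by utilization
ceph osd df --format json | python -c '
import json, sys
nodes = json.load(sys.stdin)["nodes"]
nodes.sort(key=lambda n: n["utilization"], reverse=True)
for n in nodes[:5] + nodes[-5:]:
    print("%s  %.1f%%" % (n["name"], n["utilization"]))
'
```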
 
Thanks,
Pardhiv Karri



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Pardhiv Karri
"Rise and Rise again until LAMBS become LIONS" 




