Hi Monis,
The settings you mention do not prevent data movement to overloaded OSDs; they are thresholds at which Ceph warns that an OSD is nearfull or backfillfull.
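If I remember correctly, on Luminous those ratios live in the OSDMap rather than the mon config, so it's worth checking what the cluster is actually using. A rough sketch (please verify on your own cluster first, I may be misremembering the exact behaviour on 12.2.0):

# Show the ratios currently active in the OSDMap
ceph osd dump | grep ratio

# On Luminous these are changed via the set-*-ratio commands,
# not the mon_osd_* options:
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90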
I'm no expert on this, but setting backfillfull lower than nearfull is not recommended; the nearfull state should be reached first, before backfillfull.
You can reweight the overloaded OSDs manually by issuing: ceph osd reweight osd.X 0.95 (the last value should be between 0 and 1, where 1 is the default and can be seen as 100%; setting it to 0.95 means only 95% of the OSD is used. To move more PGs off the OSD, you can lower the value further to 0.9 or 0.85.)
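As a sketch of the workflow I'd use (osd.X is just a placeholder for your fullest OSD):

# Inspect per-OSD utilization to find the fullest OSDs
ceph osd df

# Dry-run: show what reweight-by-utilization would change, without applying it
ceph osd test-reweight-by-utilization

# Reweight the fullest OSD down a little and let backfill settle
ceph osd reweight osd.X 0.95

# Watch progress with "ceph -s" and "ceph osd df";
# repeat with 0.90 or 0.85 if the OSD is still too full

Small steps are safer than one big reweight, since each change triggers backfill traffic of its own.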
Kind regards,
Caspar
2018-04-18 9:07 GMT+02:00 Monis Monther <mmmm82@xxxxxxxxx>:
Hi,

We are running a cluster with Ceph Luminous 12.2.0. Some of the OSDs are getting full and we are running ceph osd reweight-by-utilization to re-balance the OSDs. We have also set

mon_osd_backfillfull_ratio 0.8 (this is to prevent moving data to an overloaded OSD when re-weighting)
mon_osd_nearfull_ratio 0.85

However, reweight is worsening the problem by moving data from an 85% full OSD to an 84.7% full OSD instead of moving it to a half-empty OSD. This is causing the latter to increase up to 85.6%. Some OSDs have now reached 87% and 86%.

Moreover, the cluster does not show any OSD as nearfull although some OSDs have passed 86%, and it is totally ignoring the backfillfull setting by moving data to OSDs that are above 80%.

Are the settings above wrong? What can we do to prevent moving data to overloaded OSDs?

--
Best Regards
Monis
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com