Try:
ceph pg set_nearfull_ratio 0.9
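If that takes effect, the change should show up in the PGMap straight away
(no restarts needed). A quick sanity check, assuming the same plain dump
format as in your output below:

# ceph pg set_nearfull_ratio 0.9
# ceph pg dump | grep nearfull_ratio
nearfull_ratio 0.9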
On 26 Jul 2016 08:16, "Goncalo Borges" <goncalo.borges@xxxxxxxxxxxxx> wrote:
Hello...
I do not think these settings are working properly in Jewel. Maybe someone else can confirm.
So, to summarize:
1./ I've restarted mon and osd services (systemctl restart ceph.target) after setting:

# grep nearfull /etc/ceph/ceph.conf
mon osd nearfull ratio = 0.90

2./ Those configs seem active in the daemons' configurations:
# ceph --admin-daemon /var/run/ceph/ceph-mon.rccephmon1.asok config show |grep mon_osd_nearfull_ratio
"mon_osd_nearfull_ratio": "0.9",
# ceph daemon mon.rccephmon1 config show | grep mon_osd_nearfull_ratio
"mon_osd_nearfull_ratio": "0.9",
3./ However, I still receive warnings about near-full OSDs whenever they are above 85%.
4./ A 'ceph pg dump' still shows the default ratios:
# ceph pg dump
dumped all in format plain
version 12415999
stamp 2016-07-26 07:15:29.018848
last_osdmap_epoch 2546
last_pg_scan 2546
full_ratio 0.95
nearfull_ratio 0.85
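If the near-full warning is evaluated against these PGMap ratios rather than
against the mons' in-memory mon_osd_nearfull_ratio, that would explain what I
am seeing. To watch just the ratios without the full dump (assuming the plain
output format above):

# ceph pg dump | grep -E '^(full_ratio|nearfull_ratio)'
full_ratio 0.95
nearfull_ratio 0.85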
Cheers
G.
On 07/26/2016 12:39 PM, Brad Hubbard wrote:
On Tue, Jul 26, 2016 at 12:16:35PM +1000, Goncalo Borges wrote:
> Hi Brad. Thanks for replying. Answers inline.
>
>>> I am a bit confused about the 'unchangeable' message we get in Jewel
>>> 10.2.2 when I try to change some cluster configs. For example:
>>>
>>> 1./ if I try to change mon_osd_nearfull_ratio from 0.85 to 0.90, I get
>>>
>>> # ceph tell mon.* injectargs "--mon_osd_nearfull_ratio 0.90"
>>> mon.rccephmon1: injectargs:mon_osd_nearfull_ratio = '0.9' (unchangeable)
>>> mon.rccephmon3: injectargs:mon_osd_nearfull_ratio = '0.9' (unchangeable)
>>> mon.rccephmon2: injectargs:mon_osd_nearfull_ratio = '0.9' (unchangeable)
>>
>> This is telling you that this variable has no observers (i.e. nothing
>> monitors it dynamically) so changing it at runtime has no effect. IOW it
>> is read at start-up and not referred to again after that, IIUC.
>>
>>> but the 0.85 default value continues to be shown by
>>>
>>> ceph --show-config --conf /dev/null | grep mon_osd_nearfull_ratio
>>> mon_osd_nearfull_ratio = 0.85
>>
>> Try something like the following.
>>
>> $ ceph daemon mon.a config show | grep mon_osd_nearfull_ratio
>>
>>> and I continue to have health warnings regarding near full osds.
>>
>> So the actual config value has been changed but has no effect and will
>> not persist. IOW, this value needs to be modified in the conf file and
>> the daemon restarted.
>>
>>> 2./ If I change it in ceph.conf and restart services, I get the same
>>> behaviour as in 1./ However, if I check the daemon configuration, I see:
>>
>> Please clarify what you mean by "the same behaviour"?
>
> So, in my ceph.conf I've set 'mon osd nearfull ratio = 0.90' and restarted
> mon and osd (not sure if those were needed) daemons everywhere. After
> restarting, I am still getting the health warnings regarding near full
> osds above 85%. If the new value was active, I should not get such
> warnings.
>
>>> # ceph daemon mon.rccephmon2 config show | grep mon_osd_nearfull_ratio
>>> "mon_osd_nearfull_ratio": "0.9",
>>
>> Use the daemon command I showed above.
>
> Isn't it the same as you suggested? That was run after restarting
> services.

Yes, it is. I assumed wrongly that you were using the "--show-config"
command again here.

> so it is still unclear to me why the new value is not picked up and why
> running 'ceph --show-config --conf /dev/null | grep mon_osd_nearfull_ratio'

That command shows the default ceph config, try something like this.

$ ceph -n mon.rccephmon2 --show-config | grep mon_osd_nearfull_ratio

> still shows 0.85. Maybe a restart of services is not what has to be done
> but a stop/start instead?

You can certainly try it but I would have thought a restart would involve
stop/start of the MON daemon.

This thread includes additional information that may be relevant to you atm.

http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/23391

> Cheers
> Goncalo
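IIRC, in Jewel the near-full check is driven by the ratio stored in the
PGMap; mon_osd_nearfull_ratio only seeds that value when the cluster is
first created, so changing it later needs 'ceph pg set_nearfull_ratio'.
A sketch of the three different views (using mon.rccephmon1 from your
output):

$ ceph --show-config --conf /dev/null | grep mon_osd_nearfull_ratio    # compiled-in default only
$ ceph daemon mon.rccephmon1 config show | grep mon_osd_nearfull_ratio # what the running mon loaded
$ ceph pg dump | grep nearfull_ratio                                   # what the health check uses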
--
Goncalo Borges
Research Computing
ARC Centre of Excellence for Particle Physics at the Terascale
School of Physics A28 | University of Sydney, NSW 2006
T: +61 2 93511937
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com