Hi,
That makes sense.
How can I adjust the OSD nearfull ratio? I tried this, but it didn't change:
$ ceph tell mon.* injectargs "--mon_osd_nearfull_ratio .86"
mon.mon-a1: injectargs:mon_osd_nearfull_ratio = '0.860000' (not observed, change may require restart)
mon.mon-a2: injectargs:mon_osd_nearfull_ratio = '0.860000' (not observed, change may require restart)
mon.mon-a3: injectargs:mon_osd_nearfull_ratio = '0.860000' (not observed, change may require restart)
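
Reading the Luminous docs, I wonder if the ratio now lives in the OSDMap rather than in the mon config, which would explain the "(not observed)" message. Would something like this be the right way to apply it at runtime? (A sketch based on the standard Luminous CLI, untested here:)

$ ceph osd set-nearfull-ratio 0.86
$ ceph osd dump | grep -i ratio     # should now show nearfull_ratio 0.86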
Karun Josy
On Tue, Dec 19, 2017 at 10:05 PM, Jean-Charles Lopez <jelopez@xxxxxxxxxx> wrote:
OK so it's telling you that the near full OSD holds PGs for these three pools.

JC

On Dec 19, 2017, at 08:05, Karun Josy <karunjosy1@xxxxxxxxx> wrote:

No, I haven't.
Interestingly, the POOL_NEARFULL flag is shown only when there is an OSD_NEARFULL flag.
I have recently upgraded to Luminous 12.2.2; I hadn't seen this flag in 12.2.1.

Karun Josy

On Tue, Dec 19, 2017 at 9:27 PM, Jean-Charles Lopez <jelopez@xxxxxxxxxx> wrote:

Hi,

did you set quotas on these pools?

See this page for an explanation of most error messages: http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full

JC

On Dec 19, 2017, at 01:48, Karun Josy <karunjosy1@xxxxxxxxx> wrote:

Hello,

In one of our clusters, health is showing these warnings:
---------
OSD_NEARFULL 1 nearfull osd(s)
    osd.22 is near full
POOL_NEARFULL 3 pool(s) nearfull
    pool 'templates' is nearfull
    pool 'cvm' is nearfull
    pool 'ecpool' is nearfull
------------

One OSD is above 85% used, which I know caused the OSD_NEARFULL flag.
But what does pool(s) nearfull mean?
And how can I correct it?

$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    31742G     11147G     20594G       64.88
POOLS:
    NAME          ID     USED       %USED     MAX AVAIL     OBJECTS
    templates     5      196G       23.28     645G          50202
    cvm           6      6528       0         1076G         770
    ecpool        7      10260G     83.56     2018G         3004031

Karun
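
PS: for anyone hitting the same warning later, this is how I'd expect to confirm that the nearfull OSD really does hold PGs for all three pools (a sketch using standard ceph CLI commands; osd.22 and the pool ids are from the thread above):

$ ceph osd df                        # per-OSD utilisation; osd.22 should be >85%
$ ceph pg ls-by-osd osd.22 | awk 'NR>1 {print $1}' | cut -d. -f1 | sort -u
                                     # pool ids with PGs on osd.22 (5, 6, 7 here)
$ ceph osd pool ls detail            # map those ids back to pool names

_______________________________________________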
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com