Re: POOL_NEARFULL

Karun,

 You can check how much data each OSD has with "ceph osd df"

ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS
 1   hdd 1.84000  1.00000  1885G   769G  1115G 40.84 0.97 101
 3   hdd 4.64000  1.00000  4679G  2613G  2065G 55.86 1.33 275
 4   hdd 4.64000  1.00000  4674G  1914G  2759G 40.96 0.97 193
 5   hdd 4.64000  1.00000  4668G  1434G  3234G 30.72 0.73 148
 8   hdd 1.84000  1.00000  1874G   742G  1131G 39.61 0.94  74
 0   hdd 4.64000  1.00000  4668G  2331G  2337G 49.94 1.19 268
 2   hdd 1.84000  1.00000  4668G   868G  3800G 18.60 0.44  99
 6   hdd 4.64000  1.00000  4668G  2580G  2087G 55.28 1.32 275
 7   hdd 1.84000  1.00000  1874G   888G   985G 47.43 1.13 107
                    TOTAL 33661G 14144G 19516G 42.02
MIN/MAX VAR: 0.44/1.33  STDDEV: 11.27

 The "%USE" column shows how much space is used on each OSD. You may
need to change the weight of some of the OSDs so the data balances out
correctly with "ceph osd crush reweight osd.N W".Change the N to the
number of OSD and W to the new weight.

 As you can see above, even though all of my 4.6TB OSDs have the same
weight, their %USE values differ. So I could lower the weight of the
OSDs holding more data, and Ceph will rebalance the cluster.
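
 For example, osd.3 and osd.6 carry the most data of my 4.6TB OSDs
(VAR 1.33 and 1.32), so I could drop their CRUSH weights a little.
The 4.40000 value below is only illustrative; adjust in small steps
and let the cluster settle between changes:

    ceph osd crush reweight osd.3 4.40000
    ceph osd crush reweight osd.6 4.40000

    # watch recovery progress, then re-check the balance
    ceph -w
    ceph osd df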

 I am not too sure why this happens.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008623.html

Cary
-Dynamic

On Tue, Dec 19, 2017 at 3:57 PM, Jean-Charles Lopez <jelopez@xxxxxxxxxx> wrote:
> Hi
>
> Did you set quotas on these pools?
>
> See this page for an explanation of most error messages:
> http://docs.ceph.com/docs/master/rados/operations/health-checks/#pool-near-full
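>
> For example, you could check whether a quota is set on one of these
> pools (pool name taken from your ceph df output below) with:
>
>     ceph osd pool get-quota ecpool
>
> and, if a quota turns out to be the cause, raise it or disable it
> with something like:
>
>     ceph osd pool set-quota ecpool max_bytes 0
>
> (a value of 0 disables the quota).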
>
> JC
>
> On Dec 19, 2017, at 01:48, Karun Josy <karunjosy1@xxxxxxxxx> wrote:
>
> Hello,
>
> In one of our clusters, health is showing these warnings :
> ---------
> OSD_NEARFULL 1 nearfull osd(s)
>     osd.22 is near full
> POOL_NEARFULL 3 pool(s) nearfull
>     pool 'templates' is nearfull
>     pool 'cvm' is nearfull
>     pool 'ecpool' is nearfull
> ------------
>
> One OSD is above 85% used, which I know triggered the OSD_NEARFULL warning.
> But what does pool(s) nearfull mean?
> And how can I correct it?
>
> ]$ ceph df
> GLOBAL:
>     SIZE       AVAIL      RAW USED     %RAW USED
>     31742G     11147G       20594G         64.88
> POOLS:
>     NAME           ID     USED      %USED     MAX AVAIL     OBJECTS
>     templates       5      196G     23.28          645G       50202
>     cvm             6      6528         0         1076G         770
>     ecpool          7    10260G     83.56         2018G     3004031
>
>
>
> Karun
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


