Re: How to properly deal with NEAR FULL OSD

It would be helpful to see your crush map (there are some tunables that help with this issue as well available if you're not running ancient versions).
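If you want to look at it yourself, something like this dumps and decompiles the map (crushtool ships with Ceph):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

and on a recent enough release you can switch to the newer tunables profile - just a suggestion, since I don't know your version from the output, and note it will trigger data movement:

    ceph osd crush tunables optimal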
That said, CRUSH's distribution uniformity just isn't that great in general, so some imbalance is expected.
Increasing the number of PGs on a pool helps, but beware that there's no turning back - pg_num can only ever be raised, never lowered again.
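A minimal sketch of that - the pool name comes from your output below, the target of 512 is just my guess at a sane next power of two:

    ceph osd pool set volumes pg_num 512
    ceph osd pool set volumes pgp_num 512

pgp_num has to follow pg_num, otherwise the new PGs get created but nothing actually rebalances. Expect a fair amount of data movement while it settles.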

Other than that, play with reweights (and possibly crush weights) regularly - that's what we do...
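For individual OSDs that looks something like this - osd.23 and the weight values are purely illustrative, pick your fullest OSDs from ceph health detail and nudge them down in small steps:

    # temporary override in the osdmap, range 0.0 - 1.0
    ceph osd reweight 23 0.85

    # or adjust the crush weight itself (stronger, affects placement cluster-wide)
    ceph osd crush reweight osd.23 3.5

You can also re-run reweight-by-utilization with a lower threshold than the default of 120, e.g. ceph osd reweight-by-utilization 110, and let backfill finish before going further.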

Jan


On 18 Feb 2016, at 01:11, Vlad Blando <vblando@xxxxxxxxxxxxx> wrote:

Hi, this has been bugging me for some time now: the distribution of data across the OSDs is not balanced, so some OSDs are near full. I ran ceph osd reweight-by-utilization, but it isn't helping much.


[root@controller-node ~]# ceph osd tree
# id    weight  type name       up/down reweight
-1      98.28   root default
-2      32.76           host ceph-node-1
0       3.64                    osd.0   up      1
1       3.64                    osd.1   up      1
2       3.64                    osd.2   up      1
3       3.64                    osd.3   up      1
4       3.64                    osd.4   up      1
5       3.64                    osd.5   up      1
6       3.64                    osd.6   up      1
7       3.64                    osd.7   up      1
8       3.64                    osd.8   up      1
-3      32.76           host ceph-node-2
9       3.64                    osd.9   up      1
10      3.64                    osd.10  up      1
11      3.64                    osd.11  up      1
12      3.64                    osd.12  up      1
13      3.64                    osd.13  up      1
14      3.64                    osd.14  up      1
15      3.64                    osd.15  up      1
16      3.64                    osd.16  up      1
17      3.64                    osd.17  up      1
-4      32.76           host ceph-node-3
18      3.64                    osd.18  up      1
19      3.64                    osd.19  up      1
20      3.64                    osd.20  up      1
21      3.64                    osd.21  up      1
22      3.64                    osd.22  up      1
23      3.64                    osd.23  up      1
24      3.64                    osd.24  up      1
25      3.64                    osd.25  up      1
26      3.64                    osd.26  up      1
[root@controller-node ~]#


[root@controller-node ~]# /opt/df-osd.sh
ceph-node-1
=======================================================================
/dev/sdb1              3.7T  2.0T  1.7T  54% /var/lib/ceph/osd/ceph-0
/dev/sdc1              3.7T  2.7T  1.1T  72% /var/lib/ceph/osd/ceph-1
/dev/sdd1              3.7T  3.3T  431G  89% /var/lib/ceph/osd/ceph-2
/dev/sde1              3.7T  2.8T  879G  77% /var/lib/ceph/osd/ceph-3
/dev/sdf1              3.7T  3.3T  379G  90% /var/lib/ceph/osd/ceph-4
/dev/sdg1              3.7T  2.9T  762G  80% /var/lib/ceph/osd/ceph-5
/dev/sdh1              3.7T  3.0T  733G  81% /var/lib/ceph/osd/ceph-6
/dev/sdi1              3.7T  3.4T  284G  93% /var/lib/ceph/osd/ceph-7
/dev/sdj1              3.7T  3.4T  342G  91% /var/lib/ceph/osd/ceph-8
=======================================================================
ceph-node-2
=======================================================================
/dev/sdb1              3.7T  3.1T  622G  84% /var/lib/ceph/osd/ceph-9
/dev/sdc1              3.7T  2.7T  1.1T  72% /var/lib/ceph/osd/ceph-10
/dev/sdd1              3.7T  3.1T  557G  86% /var/lib/ceph/osd/ceph-11
/dev/sde1              3.7T  3.3T  392G  90% /var/lib/ceph/osd/ceph-12
/dev/sdf1              3.7T  2.6T  1.1T  72% /var/lib/ceph/osd/ceph-13
/dev/sdg1              3.7T  2.8T  879G  77% /var/lib/ceph/osd/ceph-14
/dev/sdh1              3.7T  2.7T  984G  74% /var/lib/ceph/osd/ceph-15
/dev/sdi1              3.7T  3.2T  463G  88% /var/lib/ceph/osd/ceph-16
/dev/sdj1              3.7T  3.1T  594G  85% /var/lib/ceph/osd/ceph-17
=======================================================================
ceph-node-3
=======================================================================
/dev/sdb1              3.7T  2.8T  910G  76% /var/lib/ceph/osd/ceph-18
/dev/sdc1              3.7T  2.7T 1012G  73% /var/lib/ceph/osd/ceph-19
/dev/sdd1              3.7T  3.2T  537G  86% /var/lib/ceph/osd/ceph-20
/dev/sde1              3.7T  3.2T  465G  88% /var/lib/ceph/osd/ceph-21
/dev/sdf1              3.7T  3.0T  663G  83% /var/lib/ceph/osd/ceph-22
/dev/sdg1              3.7T  3.4T  248G  94% /var/lib/ceph/osd/ceph-23
/dev/sdh1              3.7T  2.8T  928G  76% /var/lib/ceph/osd/ceph-24
/dev/sdi1              3.7T  2.9T  802G  79% /var/lib/ceph/osd/ceph-25
/dev/sdj1              3.7T  2.7T  1.1T  73% /var/lib/ceph/osd/ceph-26
=======================================================================
[root@controller-node ~]#


[root@controller-node ~]# ceph health detail
HEALTH_ERR 2 pgs inconsistent; 10 near full osd(s); 2 scrub errors
pg 5.7f is active+clean+inconsistent, acting [2,12,18]
pg 5.38 is active+clean+inconsistent, acting [7,9,24]
osd.2 is near full at 88%
osd.4 is near full at 89%
osd.7 is near full at 92%
osd.8 is near full at 90%
osd.11 is near full at 85%
osd.12 is near full at 89%
osd.16 is near full at 87%
osd.20 is near full at 85%
osd.21 is near full at 87%
osd.23 is near full at 93%
2 scrub errors
[root@controller-node ~]#

[root@controller-node ~]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    100553G     19796G     80757G       80.31
POOLS:
    NAME        ID     USED       %USED     OBJECTS
    images      4      8680G      8.63      1111395
    volumes     5      18192G     18.09     4675359
[root@controller-node ~]#


[root@controller-node ~]# ceph osd pool get images pg_num
pg_num: 1024
[root@controller-node ~]#
[root@controller-node ~]# ceph osd pool get volumes pg_num
pg_num: 300
[root@controller-node ~]#


Thanks.


/Vlad
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

