OSD space imbalance

Hello,

I'm having an issue where disk usage isn't well balanced across OSDs, which
ends up wasting disk space. Ceph is the latest 0.94.2, used exclusively
through CephFS. Re-weighting helps, but only slightly, and it has to be done
daily, which causes constant backfilling. I end up with some OSDs at 65%
usage while others go over 90%. I also set "ceph osd crush tunables
optimal", but didn't notice any change in disk usage. Is there anything I
can do to keep all OSDs within at least a 10% range of each other?
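
By re-weighting I mean the standard OSD reweight commands, i.e. something
along these lines (the threshold and weight values are only examples):

    ceph osd reweight-by-utilization 110   # reweight OSDs more than 10% above average use
    ceph osd reweight 4 0.85               # or manually lower a single nearly-full OSD
    ceph osd crush tunables optimal        # the tunables change mentioned above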

     health HEALTH_OK
     mdsmap e2577: 1/1/1 up, 2 up:standby
     osdmap e25239: 48 osds: 48 up, 48 in
      pgmap v3188836: 5184 pgs, 3 pools, 18028 GB data, 6385 kobjects
            36156 GB used, 9472 GB / 45629 GB avail
                5184 active+clean
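
(For scale: 36156 GB used for 18028 GB of data works out to roughly 2x
replication, so on average about 5184 * 2 / 48 = 216 PG replicas land on
each OSD.)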


ID WEIGHT  REWEIGHT SIZE   USE    AVAIL   %USE  VAR
37 0.92999  1.00000   950G   625G    324G 65.85 0.83
21 0.92999  1.00000   950G   649G    300G 68.35 0.86
32 0.92999  1.00000   950G   670G    279G 70.58 0.89
 7 0.92999  1.00000   950G   676G    274G 71.11 0.90
17 0.92999  1.00000   950G   681G    268G 71.73 0.91
40 0.92999  1.00000   950G   689G    260G 72.55 0.92
20 0.92999  1.00000   950G   690G    260G 72.62 0.92
25 0.92999  1.00000   950G   691G    258G 72.76 0.92
 2 0.92999  1.00000   950G   694G    256G 73.03 0.92
39 0.92999  1.00000   950G   697G    253G 73.35 0.93
18 0.92999  1.00000   950G   703G    247G 74.00 0.93
47 0.92999  1.00000   950G   703G    246G 74.05 0.93
23 0.92999  0.86693   950G   704G    245G 74.14 0.94
 6 0.92999  1.00000   950G   726G    224G 76.39 0.96
 8 0.92999  1.00000   950G   727G    223G 76.54 0.97
 5 0.92999  1.00000   950G   728G    222G 76.62 0.97
35 0.92999  1.00000   950G   728G    221G 76.66 0.97
11 0.92999  1.00000   950G   730G    220G 76.82 0.97
43 0.92999  1.00000   950G   730G    219G 76.87 0.97
33 0.92999  1.00000   950G   734G    215G 77.31 0.98
38 0.92999  1.00000   950G   736G    214G 77.49 0.98
12 0.92999  1.00000   950G   737G    212G 77.61 0.98
31 0.92999  0.85184   950G   742G    208G 78.09 0.99
28 0.92999  1.00000   950G   745G    205G 78.41 0.99
27 0.92999  1.00000   950G   751G    199G 79.04 1.00
10 0.92999  1.00000   950G   754G    195G 79.40 1.00
13 0.92999  1.00000   950G   762G    188G 80.21 1.01
 9 0.92999  1.00000   950G   763G    187G 80.29 1.01
16 0.92999  1.00000   950G   764G    186G 80.37 1.01
 0 0.92999  1.00000   950G   778G    171G 81.94 1.03
 3 0.92999  1.00000   950G   780G    170G 82.11 1.04
41 0.92999  1.00000   950G   780G    169G 82.13 1.04
34 0.92999  0.87303   950G   783G    167G 82.43 1.04
14 0.92999  1.00000   950G   784G    165G 82.56 1.04
42 0.92999  1.00000   950G   786G    164G 82.70 1.04
46 0.92999  1.00000   950G   788G    162G 82.93 1.05
30 0.92999  1.00000   950G   790G    160G 83.12 1.05
45 0.92999  1.00000   950G   804G    146G 84.59 1.07
44 0.92999  1.00000   950G   807G    143G 84.92 1.07
 1 0.92999  1.00000   950G   817G    132G 86.05 1.09
22 0.92999  1.00000   950G   825G    125G 86.81 1.10
15 0.92999  1.00000   950G   826G    123G 86.97 1.10
19 0.92999  1.00000   950G   829G    120G 87.30 1.10
36 0.92999  1.00000   950G   831G    119G 87.48 1.10
24 0.92999  1.00000   950G   831G    118G 87.50 1.10
26 0.92999  1.00000   950G   851G 101692M 89.55 1.13
29 0.92999  1.00000   950G   851G 101341M 89.59 1.13
 4 0.92999  1.00000   950G   860G  92164M 90.53 1.14
MIN/MAX VAR: 0.83/1.14  STDDEV: 5.94
              TOTAL 45629G 36156G   9473G 79.24
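
(In the output above, VAR is each OSD's %USE relative to the cluster-wide
average of 79.24%, e.g. 65.85 / 79.24 = 0.83 for osd.37 and 90.53 / 79.24 =
1.14 for osd.4, and STDDEV is the standard deviation of %USE across all 48
OSDs.)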

Thanks,
Vedran

