Re: Octopus - unbalanced OSDs

This should help — in upmap mode the balancer stops optimizing once every
OSD is within mgr/balancer/upmap_max_deviation PGs of its target, and the
default (5 PGs in Octopus, if I recall correctly) is quite coarse when many
of your OSDs hold only ~25-50 PGs: 5 PGs of slack there is a 10-20%
utilization spread, which roughly matches the 0.82/1.26 MIN/MAX VAR in your
output. Tightening the deviation to 1 lets the balancer keep going:

ceph config set mgr mgr/balancer/upmap_max_deviation 1
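
To sanity-check before and after, something along these lines should work
(a minimal sketch, assuming the balancer module is enabled as your status
output shows; "ceph balancer eval" prints a score for the current
distribution, lower is better):

# confirm the new setting took effect
ceph config get mgr mgr/balancer/upmap_max_deviation

# score the current distribution and check that the balancer
# starts producing new upmap plans
ceph balancer eval
ceph balancer status

# once backfill settles, the VAR column should tighten up
ceph osd df tree

Expect some backfill traffic while the new upmaps are applied, so on a busy
cluster you may want to lower the deviation gradually (e.g. 3, then 2,
then 1).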

On Mon, Apr 19, 2021 at 10:17 AM Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:
>
> Anyone have an idea? :)
>
> On Fri, Apr 16, 2021 at 3:09 PM Ml Ml <mliebherr99@xxxxxxxxxxxxxx> wrote:
> >
> > Hello List,
> >
> > any ideas why my OSDs are so unbalanced?
> >
> > root@ceph01:~# ceph -s
> >   cluster:
> >     id:     5436dd5d-83d4-4dc8-a93b-60ab5db145df
> >     health: HEALTH_WARN
> >             1 nearfull osd(s)
> >             4 pool(s) nearfull
> >
> >   services:
> >     mon: 3 daemons, quorum ceph03,ceph01,ceph02 (age 2w)
> >     mgr: ceph03(active, since 4M), standbys: ceph02.jwvivm
> >     mds: backup:1 {0=backup.ceph06.hdjehi=up:active} 3 up:standby
> >     osd: 56 osds: 56 up (since 29h), 56 in (since 3d)
> >
> >   task status:
> >     scrub status:
> >         mds.backup.ceph06.hdjehi: idle
> >
> >   data:
> >     pools:   4 pools, 1185 pgs
> >     objects: 24.29M objects, 44 TiB
> >     usage:   151 TiB used, 55 TiB / 206 TiB avail
> >     pgs:     675 active+clean
> >              476 active+clean+snaptrim_wait
> >              30  active+clean+snaptrim
> >              4   active+clean+scrubbing+deep
> >
> > root@ceph01:~# ceph osd df tree
> > ID   CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
> >  -1         206.79979         -  206 TiB  151 TiB  151 TiB   36 GiB  503 GiB   55 TiB  73.23  1.00    -          root default
> >  -2          28.89995         -   29 TiB   20 TiB   20 TiB  5.5 GiB   74 GiB  8.9 TiB  69.19  0.94    -              host ceph01
> >   0    hdd    2.70000   1.00000  2.7 TiB  1.8 TiB  1.8 TiB  590 MiB  6.9 GiB  908 GiB  66.81  0.91   44      up          osd.0
> >   1    hdd    2.70000   1.00000  2.7 TiB  1.6 TiB  1.6 TiB  411 MiB  6.5 GiB  1.1 TiB  60.43  0.83   39      up          osd.1
> >   4    hdd    2.70000   1.00000  2.7 TiB  1.8 TiB  1.8 TiB  501 MiB  6.8 GiB  898 GiB  67.15  0.92   43      up          osd.4
> >   8    hdd    2.70000   1.00000  2.7 TiB  2.0 TiB  2.0 TiB  453 MiB  7.0 GiB  700 GiB  74.39  1.02   47      up          osd.8
> >  11    hdd    1.70000   1.00000  1.7 TiB  1.3 TiB  1.3 TiB  356 MiB  5.6 GiB  433 GiB  75.39  1.03   31      up          osd.11
> >  12    hdd    2.70000   1.00000  2.7 TiB  2.1 TiB  2.1 TiB  471 MiB  7.0 GiB  591 GiB  78.40  1.07   48      up          osd.12
> >  14    hdd    2.70000   1.00000  2.7 TiB  1.6 TiB  1.6 TiB  448 MiB  6.0 GiB  1.1 TiB  59.68  0.82   38      up          osd.14
> >  18    hdd    2.70000   1.00000  2.7 TiB  1.7 TiB  1.7 TiB  515 MiB  6.2 GiB  980 GiB  64.15  0.88   41      up          osd.18
> >  22    hdd    1.70000   1.00000  1.7 TiB  1.2 TiB  1.2 TiB  360 MiB  4.2 GiB  491 GiB  72.06  0.98   29      up          osd.22
> >  30    hdd    1.70000   1.00000  1.7 TiB  1.2 TiB  1.2 TiB  366 MiB  4.7 GiB  558 GiB  68.26  0.93   28      up          osd.30
> >  33    hdd    1.59999   1.00000  1.6 TiB  1.2 TiB  1.2 TiB  406 MiB  4.9 GiB  427 GiB  74.28  1.01   29      up          osd.33
> >  64    hdd    3.29999   1.00000  3.3 TiB  2.4 TiB  2.4 TiB  736 MiB  8.6 GiB  915 GiB  73.22  1.00   60      up          osd.64
> >  -3          29.69995         -   30 TiB   22 TiB   22 TiB  5.4 GiB   81 GiB  7.9 TiB  73.20  1.00    -              host ceph02
> >   2    hdd    1.70000   1.00000  1.7 TiB  1.3 TiB  1.2 TiB  402 MiB  5.2 GiB  476 GiB  72.93  1.00   30      up          osd.2
> >   3    hdd    2.70000   1.00000  2.7 TiB  2.0 TiB  2.0 TiB  653 MiB  7.8 GiB  652 GiB  76.15  1.04   49      up          osd.3
> >   7    hdd    2.70000   1.00000  2.7 TiB  2.5 TiB  2.5 TiB  456 MiB  7.7 GiB  209 GiB  92.36  1.26   56      up          osd.7
> >   9    hdd    2.70000   1.00000  2.7 TiB  1.9 TiB  1.9 TiB  434 MiB  7.2 GiB  781 GiB  71.46  0.98   46      up          osd.9
> >  13    hdd    2.39999   1.00000  2.4 TiB  1.6 TiB  1.6 TiB  451 MiB  6.1 GiB  823 GiB  66.28  0.91   38      up          osd.13
> >  16    hdd    2.70000   1.00000  2.7 TiB  1.6 TiB  1.6 TiB  375 MiB  6.4 GiB  1.1 TiB  59.84  0.82   39      up          osd.16
> >  19    hdd    1.70000   1.00000  1.7 TiB  1.1 TiB  1.1 TiB  323 MiB  4.7 GiB  601 GiB  65.80  0.90   27      up          osd.19
> >  23    hdd    2.70000   1.00000  2.7 TiB  2.2 TiB  2.2 TiB  471 MiB  7.7 GiB  520 GiB  80.99  1.11   50      up          osd.23
> >  24    hdd    1.70000   1.00000  1.7 TiB  1.4 TiB  1.4 TiB  371 MiB  5.5 GiB  273 GiB  84.44  1.15   32      up          osd.24
> >  28    hdd    2.70000   1.00000  2.7 TiB  1.9 TiB  1.9 TiB  428 MiB  7.4 GiB  818 GiB  70.07  0.96   44      up          osd.28
> >  31    hdd    2.70000   1.00000  2.7 TiB  2.0 TiB  2.0 TiB  516 MiB  7.4 GiB  660 GiB  75.85  1.04   48      up          osd.31
> >  32    hdd    3.29999   1.00000  3.3 TiB  2.2 TiB  2.2 TiB  661 MiB  7.9 GiB  1.2 TiB  64.86  0.89   52      up          osd.32
> >  -4          26.29996         -   26 TiB   18 TiB   18 TiB  4.3 GiB   73 GiB  8.0 TiB  69.58  0.95    -              host ceph03
> >   5    hdd    1.70000   1.00000  1.7 TiB  1.2 TiB  1.2 TiB  298 MiB  5.2 GiB  541 GiB  69.21  0.95   29      up          osd.5
> >   6    hdd    1.70000   1.00000  1.7 TiB  1.0 TiB  1.0 TiB  321 MiB  4.4 GiB  697 GiB  60.34  0.82   25      up          osd.6
> >  10    hdd    2.70000   1.00000  2.7 TiB  1.9 TiB  1.9 TiB  431 MiB  7.5 GiB  796 GiB  70.89  0.97   46      up          osd.10
> >  15    hdd    2.70000   1.00000  2.7 TiB  1.9 TiB  1.9 TiB  500 MiB  6.6 GiB  805 GiB  70.55  0.96   44      up          osd.15
> >  17    hdd    1.59999   1.00000  1.6 TiB  1.1 TiB  1.1 TiB  377 MiB  4.9 GiB  530 GiB  68.05  0.93   27      up          osd.17
> >  20    hdd    1.70000   1.00000  1.7 TiB  1.0 TiB  1.0 TiB  223 MiB  4.7 GiB  685 GiB  61.03  0.83   25      up          osd.20
> >  21    hdd    2.70000   1.00000  2.7 TiB  1.7 TiB  1.7 TiB  392 MiB  6.7 GiB  951 GiB  65.23  0.89   42      up          osd.21
> >  25    hdd    1.70000   1.00000  1.7 TiB  1.1 TiB  1.1 TiB  157 MiB  5.1 GiB  601 GiB  65.83  0.90   27      up          osd.25
> >  26    hdd    2.70000   1.00000  2.7 TiB  2.1 TiB  2.1 TiB  512 MiB  7.6 GiB  573 GiB  79.06  1.08   50      up          osd.26
> >  27    hdd    2.70000   1.00000  2.7 TiB  1.9 TiB  1.9 TiB  473 MiB  7.6 GiB  805 GiB  70.55  0.96   46      up          osd.27
> >  29    hdd    2.70000   1.00000  2.7 TiB  2.1 TiB  2.1 TiB  478 MiB  7.3 GiB  539 GiB  80.29  1.10   50      up          osd.29
> >  63    hdd    1.70000   1.00000  1.7 TiB  1.1 TiB  1.1 TiB  195 MiB  5.1 GiB  646 GiB  63.23  0.86   26      up          osd.63
> > -11          24.79999         -   25 TiB   18 TiB   18 TiB  4.1 GiB   59 GiB  6.3 TiB  74.51  1.02    -              host ceph04
> >  34    hdd    5.20000   1.00000  5.2 TiB  3.9 TiB  3.8 TiB  954 MiB   13 GiB  1.4 TiB  73.48  1.00   94      up          osd.34
> >  42    hdd    5.20000   1.00000  5.2 TiB  3.9 TiB  3.8 TiB  841 MiB   13 GiB  1.4 TiB  73.43  1.00   94      up          osd.42
> >  44    hdd    7.20000   1.00000  7.2 TiB  5.5 TiB  5.5 TiB  1.2 GiB   17 GiB  1.6 TiB  77.54  1.06  133      up          osd.44
> >  45    hdd    7.20000   1.00000  7.2 TiB  5.2 TiB  5.2 TiB  1.2 GiB   16 GiB  1.9 TiB  73.03  1.00  125      up          osd.45
> > -13          30.09998         -   30 TiB   22 TiB   22 TiB  5.1 GiB   72 GiB  8.0 TiB  73.48  1.00    -              host ceph05
> >  39    hdd    7.20000   1.00000  7.2 TiB  5.6 TiB  5.6 TiB  1.4 GiB   17 GiB  1.6 TiB  77.89  1.06  126      up          osd.39
> >  40    hdd    7.20000   1.00000  7.2 TiB  5.3 TiB  5.3 TiB  1.2 GiB   17 GiB  1.9 TiB  73.87  1.01  124      up          osd.40
> >  41    hdd    7.20000   1.00000  7.2 TiB  5.4 TiB  5.3 TiB  1.1 GiB   17 GiB  1.8 TiB  74.92  1.02  128      up          osd.41
> >  43    hdd    5.20000   1.00000  5.2 TiB  3.7 TiB  3.7 TiB  853 MiB   13 GiB  1.5 TiB  71.28  0.97   91      up          osd.43
> >  60    hdd    3.29999   1.00000  3.3 TiB  2.1 TiB  2.1 TiB  573 MiB  7.9 GiB  1.2 TiB  63.53  0.87   52      up          osd.60
> >  -9          17.59999         -   18 TiB   12 TiB   12 TiB  3.0 GiB   40 GiB  5.3 TiB  70.10  0.96    -              host ceph06
> >  35    hdd    7.20000   1.00000  7.2 TiB  5.2 TiB  5.2 TiB  1.3 GiB   16 GiB  1.9 TiB  72.80  0.99  125      up          osd.35
> >  36    hdd    5.20000   1.00000  5.2 TiB  3.5 TiB  3.5 TiB  804 MiB   12 GiB  1.8 TiB  66.50  0.91   85      up          osd.36
> >  38    hdd    5.20000   1.00000  5.2 TiB  3.7 TiB  3.7 TiB  978 MiB   12 GiB  1.6 TiB  70.02  0.96   88      up          osd.38
> > -15          24.89998         -   25 TiB   18 TiB   18 TiB  4.3 GiB   58 GiB  6.5 TiB  73.75  1.01    -              host ceph07
> >  66    hdd    7.20000   1.00000  7.2 TiB  5.3 TiB  5.3 TiB  1.1 GiB   17 GiB  1.8 TiB  74.74  1.02  126      up          osd.66
> >  67    hdd    7.20000   1.00000  7.2 TiB  5.3 TiB  5.3 TiB  1.2 GiB   17 GiB  1.8 TiB  74.51  1.02  121      up          osd.67
> >  68    hdd    3.29999   1.00000  3.3 TiB  2.3 TiB  2.3 TiB  720 MiB  7.9 GiB  1.0 TiB  68.63  0.94   55      up          osd.68
> >  69    hdd    7.20000   1.00000  7.2 TiB  5.3 TiB  5.3 TiB  1.2 GiB   17 GiB  1.8 TiB  74.40  1.02  129      up          osd.69
> > -17          24.50000         -   24 TiB   20 TiB   20 TiB  4.1 GiB   47 GiB  4.4 TiB  82.08  1.12    -              host ceph08
> >  37    hdd    9.50000   1.00000  9.5 TiB  7.8 TiB  7.7 TiB  1.5 GiB   18 GiB  1.8 TiB  81.39  1.11  166      up          osd.37
> >  46    hdd    5.00000   1.00000  5.0 TiB  4.0 TiB  3.9 TiB  889 MiB  9.3 GiB  1.0 TiB  79.67  1.09   87      up          osd.46
> >  47    hdd    5.00000   1.00000  5.0 TiB  4.2 TiB  4.2 TiB  863 MiB  9.7 GiB  817 GiB  83.90  1.15   90      up          osd.47
> >  48    hdd    5.00000   1.00000  5.0 TiB  4.2 TiB  4.2 TiB  969 MiB   10 GiB  813 GiB  83.99  1.15   91      up          osd.48
> >                            TOTAL  206 TiB  151 TiB  151 TiB   36 GiB  503 GiB   55 TiB  73.23
> > MIN/MAX VAR: 0.82/1.26  STDDEV: 7.00
> >
> >
> > root@ceph01:~# ceph balancer status
> > {
> >     "active": true,
> >     "last_optimize_duration": "0:00:00.016174",
> >     "last_optimize_started": "Fri Apr 16 12:54:47 2021",
> >     "mode": "upmap",
> >     "optimize_result": "Unable to find further optimization, or pool(s) pg_num is decreasing, or distribution is already perfect",
> >     "plans": []
> > }
> >
> >
> > root@ceph01:~# ceph versions
> > {
> >     "mon": {
> >         "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)": 3
> >     },
> >     "mgr": {
> >         "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)": 2
> >     },
> >     "osd": {
> >         "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)": 56
> >     },
> >     "mds": {
> >         "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)": 4
> >     },
> >     "overall": {
> >         "ceph version 15.2.5 (2c93eff00150f0cc5f106a559557a58d3d7b6f1f) octopus (stable)": 65
> >     }
> > }
> >
> > root@ceph01:~# ceph osd crush rule ls
> > replicated_ruleset
> >
> > root@ceph01:~# ceph osd crush rule dump replicated_ruleset
> > {
> >     "rule_id": 0,
> >     "rule_name": "replicated_ruleset",
> >     "ruleset": 0,
> >     "type": 1,
> >     "min_size": 1,
> >     "max_size": 10,
> >     "steps": [
> >         {
> >             "op": "take",
> >             "item": -1,
> >             "item_name": "default"
> >         },
> >         {
> >             "op": "chooseleaf_firstn",
> >             "num": 0,
> >             "type": "host"
> >         },
> >         {
> >             "op": "emit"
> >         }
> >     ]
> > }
> >
> >
> >
> > Cheers,
> > Michael
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


