Re: Balancing PGs across OSDs

Hi,

I have set upmap_max_iterations to 2, without any impact.
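
For reference, this is roughly how the option can be applied and verified on the active mgr (a sketch, assuming the Nautilus-style mgr config interface; the value 2 is just the one mentioned above):

$ ceph config set mgr mgr/balancer/upmap_max_iterations 2
$ ceph config get mgr mgr/balancer/upmap_max_iterations
$ ceph balancer status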

In my opinion the issue is that the evaluation of the OSDs' data load is not
working. Or can you explain why osdmaptool does not report anything to do?
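
To cross-check, osdmaptool can also be rerun with an explicit iteration limit and a tighter deviation, to see whether it merely considers the cluster "balanced enough" (a sketch; --upmap-max and --upmap-deviation exist in the Nautilus-era osdmaptool, but the deviation semantics differ between releases, so the values below are only illustrative):

$ ceph osd getmap -o om
$ osdmaptool om --upmap /tmp/upmap.sh --upmap-pool cephfs_data --upmap-max 100 --upmap-deviation 0.001
$ cat /tmp/upmap.sh

If that still proposes nothing, it would point at the data-load evaluation rather than the iteration limit.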

Regards
Thomas

On 03.12.2019 at 08:26, Harald Staub wrote:
> Hi all
>
> Something to try:
> ceph config set mgr mgr/balancer/upmap_max_iterations 20
>
> (Default is 100.)
>
> Cheers
>  Harry
>
> On 03.12.19 08:02, Lars Täuber wrote:
>> BTW: The osdmaptool doesn't see anything to do either:
>>
>> $ ceph osd getmap -o om
>> $ osdmaptool om --upmap /tmp/upmap.sh --upmap-pool cephfs_data
>> osdmaptool: osdmap file 'om'
>> writing upmap command output to: /tmp/upmap.sh
>> checking for upmap cleanups
>> upmap, max-count 100, max deviation 0.01
>>   limiting to pools cephfs_data (1)
>> no upmaps proposed
>>
>>
>>
>>
>> Tue, 3 Dec 2019 07:30:24 +0100
>> Lars Täuber <taeuber@xxxxxxx> ==> Konstantin Shalygin <k0ste@xxxxxxxx> :
>>> Hi Konstantin,
>>>
>>>
>>> Tue, 3 Dec 2019 10:01:34 +0700
>>> Konstantin Shalygin <k0ste@xxxxxxxx> ==> Lars Täuber
>>> <taeuber@xxxxxxx>, ceph-users@xxxxxxx :
>>>> Please paste your `ceph osd df tree`, `ceph osd pool ls detail`, `ceph
>>>> osd crush rule dump`.
>>>
>>> here it comes:
>>>
>>> $ ceph osd df tree
>>> ID  CLASS WEIGHT    REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL    %USE  VAR  PGS STATUS TYPE NAME
>>>   -1       195.40730        - 195 TiB 130 TiB 128 TiB  58 GiB 476 GiB   66 TiB 66.45 1.00   -        root default
>>> -25       195.40730        - 195 TiB 130 TiB 128 TiB  58 GiB 476 GiB   66 TiB 66.45 1.00   -            room PRZ
>>> -26       195.40730        - 195 TiB 130 TiB 128 TiB  58 GiB 476 GiB   66 TiB 66.45 1.00   -                row rechts
>>> -27        83.74599        -  84 TiB  57 TiB  56 TiB  25 GiB 206 GiB   27 TiB 67.51 1.02   -                    rack 1-eins
>>>   -3        27.91533        -  28 TiB  18 TiB  17 TiB 8.4 GiB  66 GiB   10 TiB 62.80 0.95   -                        host onode1
>>>    0   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.8 MiB  14 GiB  2.1 TiB 62.48 0.94 163     up                     osd.0
>>>    1   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 6.5 MiB  12 GiB  2.1 TiB 62.47 0.94 163     up                     osd.1
>>>    2   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 7.1 MiB  12 GiB  2.1 TiB 62.53 0.94 163     up                     osd.2
>>>    3   hdd   5.51459  1.00000 5.5 TiB 3.5 TiB 3.4 TiB 7.5 MiB  12 GiB  2.0 TiB 62.90 0.95 164     up                     osd.3
>>>   37   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 6.4 MiB  13 GiB  1.8 TiB 67.32 1.01 176     up                     osd.37
>>>    4   ssd   0.34239  1.00000 351 GiB  11 GiB 187 MiB 8.3 GiB 2.0 GiB  340 GiB  3.01 0.05 110     up                     osd.4
>>> -13        27.91533        -  28 TiB  17 TiB  17 TiB 8.2 GiB  66 GiB   10 TiB 62.64 0.94   -                        host onode4
>>>   13   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.0 MiB  13 GiB  2.1 TiB 62.49 0.94 163     up                     osd.13
>>>   14   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.2 MiB  13 GiB  2.1 TiB 62.49 0.94 163     up                     osd.14
>>>   15   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.4 MiB  12 GiB  2.1 TiB 62.43 0.94 163     up                     osd.15
>>>   16   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 5.8 MiB  12 GiB  2.1 TiB 62.13 0.94 162     up                     osd.16
>>>   40   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 3.6 MiB  13 GiB  1.8 TiB 67.36 1.01 176     up                     osd.40
>>>   33   ssd   0.34239  1.00000 351 GiB  11 GiB 201 MiB 8.2 GiB 2.2 GiB  340 GiB  3.02 0.05 110     up                     osd.33
>>> -22        27.91533        -  28 TiB  22 TiB  21 TiB 8.1 GiB  74 GiB  6.4 TiB 77.10 1.16   -                        host onode7
>>>   25   hdd   5.51459  1.00000 5.5 TiB 4.3 TiB 4.2 TiB 7.2 MiB  14 GiB  1.2 TiB 77.59 1.17 203     up                     osd.25
>>>   26   hdd   5.51459  1.00000 5.5 TiB 4.3 TiB 4.3 TiB 4.7 MiB  14 GiB  1.2 TiB 78.40 1.18 205     up                     osd.26
>>>   27   hdd   5.51459  1.00000 5.5 TiB 4.2 TiB 4.1 TiB 3.8 MiB  14 GiB  1.3 TiB 75.80 1.14 198     up                     osd.27
>>>   28   hdd   5.51459  1.00000 5.5 TiB 4.2 TiB 4.1 TiB 4.5 MiB  14 GiB  1.3 TiB 76.13 1.15 199     up                     osd.28
>>>   30   hdd   5.51459  1.00000 5.5 TiB 4.5 TiB 4.5 TiB 8.2 MiB  15 GiB 1006 GiB 82.18 1.24 215     up                     osd.30
>>>   36   ssd   0.34239  1.00000 351 GiB  10 GiB 184 MiB 8.1 GiB 2.0 GiB  340 GiB  2.92 0.04 110     up                     osd.36
>>> -28        55.83066        -  56 TiB  35 TiB  34 TiB  17 GiB 132 GiB   21 TiB 62.36 0.94   -                    rack 2-zwei
>>>   -7        27.91533        -  28 TiB  17 TiB  17 TiB 8.2 GiB  66 GiB   11 TiB 62.27 0.94   -                        host onode2
>>>    5   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.0 MiB  12 GiB  2.1 TiB 62.08 0.93 162     up                     osd.5
>>>    6   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.9 MiB  13 GiB  2.1 TiB 62.13 0.93 162     up                     osd.6
>>>    7   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.3 TiB 3.7 MiB  12 GiB  2.1 TiB 61.77 0.93 161     up                     osd.7
>>>    8   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.3 TiB 3.2 MiB  12 GiB  2.1 TiB 61.75 0.93 161     up                     osd.8
>>>   38   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 3.7 MiB  14 GiB  1.8 TiB 67.31 1.01 176     up                     osd.38
>>>   31   ssd   0.34239  1.00000 351 GiB  11 GiB 166 MiB 8.1 GiB 2.4 GiB  340 GiB  3.04 0.05 110     up                     osd.31
>>> -16        27.91533        -  28 TiB  17 TiB  17 TiB 8.7 GiB  66 GiB   10 TiB 62.44 0.94   -                        host onode5
>>>   17   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB   4 MiB  12 GiB  2.1 TiB 62.15 0.94 162     up                     osd.17
>>>   18   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.0 MiB  13 GiB  2.1 TiB 62.16 0.94 162     up                     osd.18
>>>   19   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.5 MiB  13 GiB  2.1 TiB 62.14 0.94 162     up                     osd.19
>>>   20   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.5 MiB  13 GiB  2.1 TiB 62.12 0.93 162     up                     osd.20
>>>   41   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 3.9 MiB  14 GiB  1.8 TiB 67.31 1.01 176     up                     osd.41
>>>   34   ssd   0.34239  1.00000 351 GiB  11 GiB 192 MiB 8.7 GiB 1.8 GiB  340 GiB  3.04 0.05 109     up                     osd.34
>>> -29        55.83066        -  56 TiB  38 TiB  38 TiB  16 GiB 138 GiB   17 TiB 68.95 1.04   -                    rack 3-drei
>>> -10        27.91533        -  28 TiB  17 TiB  17 TiB 8.1 GiB  63 GiB   11 TiB 61.02 0.92   -                        host onode3
>>>    9   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 3.7 MiB  12 GiB  2.2 TiB 60.63 0.91 158     up                     osd.9
>>>   10   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 3.1 MiB  12 GiB  2.2 TiB 60.19 0.91 157     up                     osd.10
>>>   11   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 6.7 MiB  12 GiB  2.2 TiB 60.27 0.91 157     up                     osd.11
>>>   12   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 4.1 MiB  12 GiB  2.2 TiB 60.28 0.91 157     up                     osd.12
>>>   39   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 4.6 MiB  13 GiB  1.8 TiB 67.34 1.01 176     up                     osd.39
>>>   32   ssd   0.34239  1.00000 351 GiB  10 GiB 271 MiB 8.1 GiB 1.8 GiB  341 GiB  2.88 0.04 109     up                     osd.32
>>> -19        27.91533        -  28 TiB  21 TiB  21 TiB 8.1 GiB  74 GiB  6.5 TiB 76.89 1.16   -                        host onode6
>>>   21   hdd   5.51459  1.00000 5.5 TiB 4.0 TiB 4.0 TiB 6.2 MiB  13 GiB  1.5 TiB 72.79 1.10 190     up                     osd.21
>>>   22   hdd   5.51459  1.00000 5.5 TiB 4.5 TiB 4.5 TiB 5.1 MiB  16 GiB  1.0 TiB 81.79 1.23 214     up                     osd.22
>>>   23   hdd   5.51459  1.00000 5.5 TiB 4.4 TiB 4.4 TiB 4.4 MiB  16 GiB  1.1 TiB 80.29 1.21 210     up                     osd.23
>>>   24   hdd   5.51459  1.00000 5.5 TiB 4.3 TiB 4.2 TiB 6.7 MiB  14 GiB  1.3 TiB 77.31 1.16 202     up                     osd.24
>>>   29   hdd   5.51459  1.00000 5.5 TiB 4.2 TiB 4.2 TiB 4.6 MiB  14 GiB  1.3 TiB 76.86 1.16 201     up                     osd.29
>>>   35   ssd   0.34239  1.00000 351 GiB  10 GiB 208 MiB 8.1 GiB 1.9 GiB  340 GiB  2.89 0.04 110     up                     osd.35
>>>                         TOTAL 195 TiB 130 TiB 128 TiB  58 GiB 476 GiB   66 TiB 66.45
>>> MIN/MAX VAR: 0.04/1.24  STDDEV: 26.74
>>>
>>>
>>> It looks better when restricted to the hdd class only:
>>>
>>> $ ceph osd df tree class hdd
>>> ID  CLASS WEIGHT    REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL    %USE  VAR  PGS STATUS TYPE NAME
>>>   -1       195.40730        - 193 TiB 130 TiB 128 TiB 169 MiB 462 GiB   63 TiB 67.24 1.00   -        root default
>>> -25       195.40730        - 193 TiB 130 TiB 128 TiB 169 MiB 462 GiB   63 TiB 67.24 1.00   -            room PRZ
>>> -26       195.40730        - 193 TiB 130 TiB 128 TiB 169 MiB 462 GiB   63 TiB 67.24 1.00   -                row rechts
>>> -27        83.74599        -  83 TiB  57 TiB  56 TiB  81 MiB 200 GiB   26 TiB 68.31 1.02   -                    rack 1-eins
>>>   -3        27.91533        -  28 TiB  18 TiB  17 TiB  31 MiB  64 GiB   10 TiB 63.54 0.94   -                        host onode1
>>>    0   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.8 MiB  14 GiB  2.1 TiB 62.48 0.93 163     up                     osd.0
>>>    1   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 6.5 MiB  12 GiB  2.1 TiB 62.47 0.93 163     up                     osd.1
>>>    2   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 7.1 MiB  12 GiB  2.1 TiB 62.53 0.93 163     up                     osd.2
>>>    3   hdd   5.51459  1.00000 5.5 TiB 3.5 TiB 3.4 TiB 7.5 MiB  12 GiB  2.0 TiB 62.90 0.94 164     up                     osd.3
>>>   37   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 6.4 MiB  13 GiB  1.8 TiB 67.32 1.00 176     up                     osd.37
>>> -13        27.91533        -  28 TiB  17 TiB  17 TiB  21 MiB  64 GiB   10 TiB 63.38 0.94   -                        host onode4
>>>   13   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.0 MiB  13 GiB  2.1 TiB 62.49 0.93 163     up                     osd.13
>>>   14   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.2 MiB  13 GiB  2.1 TiB 62.49 0.93 163     up                     osd.14
>>>   15   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.4 MiB  12 GiB  2.1 TiB 62.43 0.93 163     up                     osd.15
>>>   16   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 5.8 MiB  12 GiB  2.1 TiB 62.13 0.92 162     up                     osd.16
>>>   40   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 3.6 MiB  13 GiB  1.8 TiB 67.36 1.00 176     up                     osd.40
>>> -22        27.91533        -  28 TiB  22 TiB  21 TiB  28 MiB  72 GiB  6.1 TiB 78.02 1.16   -                        host onode7
>>>   25   hdd   5.51459  1.00000 5.5 TiB 4.3 TiB 4.2 TiB 7.2 MiB  14 GiB  1.2 TiB 77.59 1.15 203     up                     osd.25
>>>   26   hdd   5.51459  1.00000 5.5 TiB 4.3 TiB 4.3 TiB 4.7 MiB  14 GiB  1.2 TiB 78.40 1.17 205     up                     osd.26
>>>   27   hdd   5.51459  1.00000 5.5 TiB 4.2 TiB 4.1 TiB 3.8 MiB  14 GiB  1.3 TiB 75.80 1.13 198     up                     osd.27
>>>   28   hdd   5.51459  1.00000 5.5 TiB 4.2 TiB 4.1 TiB 4.5 MiB  14 GiB  1.3 TiB 76.13 1.13 199     up                     osd.28
>>>   30   hdd   5.51459  1.00000 5.5 TiB 4.5 TiB 4.5 TiB 8.2 MiB  15 GiB 1006 GiB 82.18 1.22 215     up                     osd.30
>>> -28        55.83066        -  55 TiB  35 TiB  34 TiB  38 MiB 128 GiB   20 TiB 63.09 0.94   -                    rack 2-zwei
>>>   -7        27.91533        -  28 TiB  17 TiB  17 TiB  18 MiB  63 GiB   10 TiB 63.01 0.94   -                        host onode2
>>>    5   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.0 MiB  12 GiB  2.1 TiB 62.08 0.92 162     up                     osd.5
>>>    6   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.9 MiB  13 GiB  2.1 TiB 62.13 0.92 162     up                     osd.6
>>>    7   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.3 TiB 3.7 MiB  12 GiB  2.1 TiB 61.77 0.92 161     up                     osd.7
>>>    8   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.3 TiB 3.2 MiB  12 GiB  2.1 TiB 61.75 0.92 161     up                     osd.8
>>>   38   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 3.7 MiB  14 GiB  1.8 TiB 67.31 1.00 176     up                     osd.38
>>> -16        27.91533        -  28 TiB  17 TiB  17 TiB  20 MiB  65 GiB   10 TiB 63.18 0.94   -                        host onode5
>>>   17   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB   4 MiB  12 GiB  2.1 TiB 62.15 0.92 162     up                     osd.17
>>>   18   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.0 MiB  13 GiB  2.1 TiB 62.16 0.92 162     up                     osd.18
>>>   19   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 4.5 MiB  13 GiB  2.1 TiB 62.14 0.92 162     up                     osd.19
>>>   20   hdd   5.51459  1.00000 5.5 TiB 3.4 TiB 3.4 TiB 3.5 MiB  13 GiB  2.1 TiB 62.12 0.92 162     up                     osd.20
>>>   41   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 3.9 MiB  14 GiB  1.8 TiB 67.31 1.00 176     up                     osd.41
>>> -29        55.83066        -  55 TiB  38 TiB  38 TiB  49 MiB 134 GiB   17 TiB 69.77 1.04   -                    rack 3-drei
>>> -10        27.91533        -  28 TiB  17 TiB  17 TiB  22 MiB  62 GiB   11 TiB 61.74 0.92   -                        host onode3
>>>    9   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 3.7 MiB  12 GiB  2.2 TiB 60.63 0.90 158     up                     osd.9
>>>   10   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 3.1 MiB  12 GiB  2.2 TiB 60.19 0.90 157     up                     osd.10
>>>   11   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 6.7 MiB  12 GiB  2.2 TiB 60.27 0.90 157     up                     osd.11
>>>   12   hdd   5.51459  1.00000 5.5 TiB 3.3 TiB 3.3 TiB 4.1 MiB  12 GiB  2.2 TiB 60.28 0.90 157     up                     osd.12
>>>   39   hdd   5.51459  1.00000 5.5 TiB 3.7 TiB 3.7 TiB 4.6 MiB  13 GiB  1.8 TiB 67.34 1.00 176     up                     osd.39
>>> -19        27.91533        -  28 TiB  21 TiB  21 TiB  27 MiB  72 GiB  6.1 TiB 77.81 1.16   -                        host onode6
>>>   21   hdd   5.51459  1.00000 5.5 TiB 4.0 TiB 4.0 TiB 6.2 MiB  13 GiB  1.5 TiB 72.79 1.08 190     up                     osd.21
>>>   22   hdd   5.51459  1.00000 5.5 TiB 4.5 TiB 4.5 TiB 5.1 MiB  16 GiB  1.0 TiB 81.79 1.22 214     up                     osd.22
>>>   23   hdd   5.51459  1.00000 5.5 TiB 4.4 TiB 4.4 TiB 4.4 MiB  16 GiB  1.1 TiB 80.29 1.19 210     up                     osd.23
>>>   24   hdd   5.51459  1.00000 5.5 TiB 4.3 TiB 4.2 TiB 6.7 MiB  14 GiB  1.3 TiB 77.31 1.15 202     up                     osd.24
>>>   29   hdd   5.51459  1.00000 5.5 TiB 4.2 TiB 4.2 TiB 4.6 MiB  14 GiB  1.3 TiB 76.86 1.14 201     up                     osd.29
>>>                         TOTAL 193 TiB 130 TiB 128 TiB 169 MiB 462 GiB   63 TiB 67.24
>>> MIN/MAX VAR: 0.90/1.22  STDDEV: 7.17
>>>
>>>
>>>
>>>
>>> $ ceph osd pool ls detail
>>> pool 1 'cephfs_data' erasure size 6 min_size 5 crush_rule 1
>>> object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode on
>>> last_change 20353 lfor 0/0/2366 flags
>>> hashpspool,ec_overwrites,selfmanaged_snaps max_bytes 119457034600410
>>> stripe_width 16384 target_size_ratio 0.85 application cephfs
>>>     removed_snaps
>>> [2~4,7~27,2f~1e,4f~1f,6f~39,a9~5,af~1,b1~1,b3~1,b5~1,b7~1,b9~1,bb~1,bd~1,bf~1,c1~1,c3~1,c5~1,c7~1,c9~1]
>>> pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_rule 2
>>> object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode on
>>> last_change 261 lfor 0/0/259 flags hashpspool stripe_width 0
>>> pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5
>>> target_size_ratio 0.3 application cephfs
>>>
>>>
>>> $ ceph osd crush rule dump
>>> [
>>>      {
>>>          "rule_id": 0,
>>>          "rule_name": "replicated_rule",
>>>          "ruleset": 0,
>>>          "type": 1,
>>>          "min_size": 1,
>>>          "max_size": 10,
>>>          "steps": [
>>>              {
>>>                  "op": "take",
>>>                  "item": -1,
>>>                  "item_name": "default"
>>>              },
>>>              {
>>>                  "op": "chooseleaf_firstn",
>>>                  "num": 0,
>>>                  "type": "host"
>>>              },
>>>              {
>>>                  "op": "emit"
>>>              }
>>>          ]
>>>      },
>>>      {
>>>          "rule_id": 1,
>>>          "rule_name": "cephfs_data",
>>>          "ruleset": 1,
>>>          "type": 3,
>>>          "min_size": 3,
>>>          "max_size": 6,
>>>          "steps": [
>>>              {
>>>                  "op": "set_chooseleaf_tries",
>>>                  "num": 5
>>>              },
>>>              {
>>>                  "op": "set_choose_tries",
>>>                  "num": 100
>>>              },
>>>              {
>>>                  "op": "take",
>>>                  "item": -2,
>>>                  "item_name": "default~hdd"
>>>              },
>>>              {
>>>                  "op": "chooseleaf_indep",
>>>                  "num": 0,
>>>                  "type": "host"
>>>              },
>>>              {
>>>                  "op": "emit"
>>>              }
>>>          ]
>>>      },
>>>      {
>>>          "rule_id": 2,
>>>          "rule_name": "rep_3_ssd",
>>>          "ruleset": 2,
>>>          "type": 1,
>>>          "min_size": 1,
>>>          "max_size": 10,
>>>          "steps": [
>>>              {
>>>                  "op": "take",
>>>                  "item": -6,
>>>                  "item_name": "default~ssd"
>>>              },
>>>              {
>>>                  "op": "chooseleaf_firstn",
>>>                  "num": 0,
>>>                  "type": "host"
>>>              },
>>>              {
>>>                  "op": "emit"
>>>              }
>>>          ]
>>>      }
>>> ]
>>>
>>>
>>> Thanks,
>>> Lars
>>
>>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



