Re: RESEND: Re: PG Balancer Upmap mode not working

Hello David,

Happy new year to you and everyone joining this conversation.

I have ensured that the "weight_set" is removed; that is, I executed this command:
ceph osd crush weight-set rm-compat

However, I'm not sure how to validate whether this command was successful.
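
If I read the docs correctly, one way to verify might be to list the
remaining weight sets and inspect the crush dump; I believe these
commands exist, but I'm not sure what the output looks like once the
compat set is gone:

ceph osd crush weight-set ls
ceph osd crush dump | grep -A2 choose_args

I would expect the first to print nothing and the "choose_args" section
of the dump to be empty.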

Anyway, I have noticed the following issues after running this command,
both last year (before Christmas) and again now, two weeks later.
1. Execution of ceph balancer status runs for minutes, or hangs entirely
root@ld3955:~# date && time ceph balancer status
Mon Dec 23 10:06:12 CET 2019
{
    "active": true,
    "plans": [],
    "mode": "upmap"
}

real    1m45,045s
user    0m0,315s
sys     0m0,026s
root@ld3955:~# date && time ceph balancer status
Tue Jan  7 08:11:24 CET 2020
^CInterrupted
Traceback (most recent call last):
  File "/usr/bin/ceph", line 1263, in <module>
    retval = main()
  File "/usr/bin/ceph", line 1194, in main
    verbose)
  File "/usr/bin/ceph", line 619, in new_style_command
    ret, outbuf, outs = do_command(parsed_args, target, cmdargs,
sigdict, inbuf, verbose)
  File "/usr/bin/ceph", line 593, in do_command
    return ret, '', ''
UnboundLocalError: local variable 'ret' referenced before assignment

real    102m44,084s
user    0m2,404s
sys     0m1,065s
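
As far as I can tell, the traceback itself is just a side effect of my
Ctrl-C: do_command() in /usr/bin/ceph apparently returns before its
local variable ret is assigned when the call is interrupted. The real
problem seems to be that the command never completes.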

2. The Ceph balancer is not doing anything although the PGs are not balanced
root@ld3955:~# ceph osd df class hdd-strgbx  | awk '{ print "osd."$1,
"size: "$5, "usage: " $17, "reweight: "$4 }' | sort -nk5 | grep 1.6 |
head -n 10
osd.205 size: 1.6 usage: 54.54 reweight: 1.00000
osd.243 size: 1.6 usage: 54.65 reweight: 1.00000
osd.100 size: 1.6 usage: 54.66 reweight: 1.00000
osd.154 size: 1.6 usage: 55.42 reweight: 1.00000
osd.106 size: 1.6 usage: 55.44 reweight: 1.00000
osd.262 size: 1.6 usage: 55.50 reweight: 1.00000
osd.200 size: 1.6 usage: 55.61 reweight: 1.00000
osd.255 size: 1.6 usage: 56.28 reweight: 1.00000
osd.108 size: 1.6 usage: 56.34 reweight: 1.00000
osd.201 size: 1.6 usage: 56.43 reweight: 1.00000
root@ld3955:~# ceph osd df class hdd-strgbx  | awk '{ print "osd."$1,
"size: "$5, "usage: " $17, "reweight: "$4 }' | sort -nk5 | grep 1.6 |
tail -n 10
osd.51 size: 1.6 usage: 79.61 reweight: 1.00000
osd.250 size: 1.6 usage: 79.65 reweight: 0.89999
osd.124 size: 1.6 usage: 79.68 reweight: 1.00000
osd.237 size: 1.6 usage: 79.68 reweight: 1.00000
osd.197 size: 1.6 usage: 79.77 reweight: 0.89999
osd.216 size: 1.6 usage: 80.28 reweight: 1.00000
osd.50 size: 1.6 usage: 80.38 reweight: 0.89999
osd.101 size: 1.6 usage: 80.65 reweight: 1.00000
osd.136 size: 1.6 usage: 81.41 reweight: 1.00000
osd.105 size: 1.6 usage: 82.09 reweight: 1.00000
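
In other words, usage on these identically sized OSDs ranges from about
54% to 82%, a spread of almost 28 percentage points.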

3. Offline optimization with osdmaptool is still failing (see the debug
output in my previous mail, quoted below)

Can you please advise how to troubleshoot the balancer?

Regards
Thomas


On 21.12.2019 at 03:23, David Zafman wrote:
>
> Offline optimization uses the same underlying code that the ceph-mgr
> does, so it should for the most part produce the same results.
>
> There is a special weight stored as "weight_set" in crush that is set
> by the crush-compat balancer.  I'm not sure of the exact commands, but
> these weights should be removed before testing the upmap balancer.
>
> David
>
> On 12/20/19 1:17 AM, Thomas Schneider wrote:
>> Hello David,
>>
>> many thanks for testing the new balancer code with my OSDmap.
>>
>> I have completed the task of setting the reweight of every OSD to 1.0
>> and then enabled the balancer.
>> However, there is no change to the PG distribution on my critical pool.
>>
>> Therefore I started offline optimization with osdmaptool, following
>> the instructions at
>> <https://docs.ceph.com/docs/master/rados/operations/upmap/>.
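>>
>> For reference, the invocation was along these lines (the map and
>> output file names are arbitrary, the pool name is just an example,
>> and this is the documented pattern rather than a verbatim copy of my
>> session; I believe the debug output below came from adding
>> --debug-osd 10):
>>
>> ceph osd getmap -o om
>> osdmaptool om --upmap out.txt --upmap-pool hdb_backup --debug-osd 10
>>
>> out.txt would then contain the proposed ceph osd pg-upmap-items
>> commands to apply.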
>>
>> I enabled debug output to see more details, and this is the tail end
>> of it; no optimization is proposed although the OSDs are heavily
>> unbalanced.
>> 2019-12-20 10:05:06.256 7f91a9c86ac0 10  osd.255 target 59.1325
>> deviation -0.132507 -> ratio -0.00224085 < max ratio 0.001
>> 2019-12-20 10:05:06.256 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.256 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 400 -> 404 which remapped 11.22ef out from underfull
>> osd.400
>> 2019-12-20 10:05:06.256 7f91a9c86ac0 10  existing pg_upmap_items
>> [400,404] remapped 11.22ef out from underfull osd.400, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.256 7f91a9c86ac0 10  stddev 158655 -> 158731
>> 2019-12-20 10:05:06.256 7f91a9c86ac0 10  hit local_fallback_retries 100
>> 2019-12-20 10:05:06.260 7f91a9c86ac0 10  overfull
>> 42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,101,102,103,104,105,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137
>>
>> ,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,189,190,191,192,193,194,195,196,197,198,199,201,202,203,204,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236
>>
>> ,237,238,239,240,241,242,244,245,246,247,248,249,250,251,252,253,254,256,257,258,259,260,261,263,264,265,266,267
>>
>> underfull
>> [362,361,352,327,359,356,417,340,403,306,304,309,318,328,334,381,399,401,404,414,280,293,297,383,393,294,312,411,277,355,371,285,300,347,365,394,398,409,412,415,302,342,323,402,278,281,339,351,
>>
>> 374,379,324,380,273,353,382,392,410,299,335,390,413,384,397,272,275,360,389,282,313,326,363,396,286,385,288,289,290,320,322,391,331,333,305,378,408,316,319,337,387,418,388,317,364,344,376,295,308,386,330,343,367,345,296,298,310,354,373,395,279,329,338,358,366,406,407,314,348,303,375,311,332,341,346,377,301,350,416,
>>
>> 274,284,321,372,287,292,307,325,336,400]
>> 2019-12-20 10:05:06.268 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.268 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.272 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 338 -> 359 which remapped 11.1c08 out from underfull
>> osd.338
>> 2019-12-20 10:05:06.272 7f91a9c86ac0 10  existing pg_upmap_items
>> [338,359] remapped 11.1c08 out from underfull osd.338, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.272 7f91a9c86ac0 10  stddev 158655 -> 158729
>> 2019-12-20 10:05:06.276 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.276 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.280 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 338 -> 356 which remapped 11.3008 out from underfull
>> osd.338
>> 2019-12-20 10:05:06.280 7f91a9c86ac0 10  existing pg_upmap_items
>> [338,356] remapped 11.3008 out from underfull osd.338, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.280 7f91a9c86ac0 10  stddev 158655 -> 158725
>> 2019-12-20 10:05:06.280 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.280 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.288 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 338 -> 352 which remapped 11.2593 out from underfull
>> osd.338
>> 2019-12-20 10:05:06.288 7f91a9c86ac0 10  existing pg_upmap_items
>> [338,352] remapped 11.2593 out from underfull osd.338, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.288 7f91a9c86ac0 10  stddev 158655 -> 158731
>> 2019-12-20 10:05:06.288 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.288 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.292 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 338 -> 356 which remapped 11.216 out from underfull
>> osd.338
>> 2019-12-20 10:05:06.292 7f91a9c86ac0 10  existing pg_upmap_items
>> [338,356] remapped 11.216 out from underfull osd.338, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.292 7f91a9c86ac0 10  stddev 158655 -> 158725
>> 2019-12-20 10:05:06.296 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.296 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.300 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 332 -> 356 which remapped 11.3cbf out from underfull
>> osd.332
>> 2019-12-20 10:05:06.300 7f91a9c86ac0 10  existing pg_upmap_items
>> [332,356] remapped 11.3cbf out from underfull osd.332, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.300 7f91a9c86ac0 10  stddev 158655 -> 158731
>> 2019-12-20 10:05:06.304 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.304 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.308 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 332 -> 327 which remapped 11.1a66 out from underfull
>> osd.332
>> 2019-12-20 10:05:06.308 7f91a9c86ac0 10  existing pg_upmap_items
>> [332,327] remapped 11.1a66 out from underfull osd.332, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.308 7f91a9c86ac0 10  stddev 158655 -> 158735
>> 2019-12-20 10:05:06.312 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.312 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.316 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 332 -> 359 which remapped 11.20b9 out from underfull
>> osd.332
>> 2019-12-20 10:05:06.316 7f91a9c86ac0 10  existing pg_upmap_items
>> [332,359] remapped 11.20b9 out from underfull osd.332, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.316 7f91a9c86ac0 10  stddev 158655 -> 158735
>> 2019-12-20 10:05:06.320 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.320 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.324 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 332 -> 352 which remapped 11.847 out from underfull
>> osd.332
>> 2019-12-20 10:05:06.324 7f91a9c86ac0 10  existing pg_upmap_items
>> [332,352] remapped 11.847 out from underfull osd.332, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.324 7f91a9c86ac0 10  stddev 158655 -> 158737
>> 2019-12-20 10:05:06.328 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.328 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.332 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 332 -> 352 which remapped 11.1847 out from underfull
>> osd.332
>> 2019-12-20 10:05:06.332 7f91a9c86ac0 10  existing pg_upmap_items
>> [332,352] remapped 11.1847 out from underfull osd.332, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.332 7f91a9c86ac0 10  stddev 158655 -> 158737
>> 2019-12-20 10:05:06.336 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:06.336 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:06.340 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 332 -> 356 which remapped 11.25be out from underfull
>> osd.332
>> 2019-12-20 10:05:06.340 7f91a9c86ac0 10  existing pg_upmap_items
>> [332,356] remapped 11.25be out from underfull osd.332, will try
>> cancelling it entirely
>> 2019-12-20 10:05:06.340 7f91a9c86ac0 10  stddev 158655 -> 158731
>> 2019-12-20 10:05:06.344 7f91a9c86ac0 10  skipping overfull
>> [...]
>> 2019-12-20 10:05:07.032 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:07.032 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:07.032 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 400 -> 393 which remapped 11.ce6 out from underfull
>> osd.400
>> 2019-12-20 10:05:07.032 7f91a9c86ac0 10  existing pg_upmap_items
>> [400,393] remapped 11.ce6 out from underfull osd.400, will try
>> cancelling it entirely
>> 2019-12-20 10:05:07.032 7f91a9c86ac0 10  stddev 158655 -> 158729
>> 2019-12-20 10:05:07.036 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:07.036 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:07.040 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 400 -> 371 which remapped 11.18b6 out from underfull
>> osd.400
>> 2019-12-20 10:05:07.040 7f91a9c86ac0 10  existing pg_upmap_items
>> [400,371] remapped 11.18b6 out from underfull osd.400, will try
>> cancelling it entirely
>> 2019-12-20 10:05:07.040 7f91a9c86ac0 10  stddev 158655 -> 158725
>> 2019-12-20 10:05:07.044 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:07.044 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:07.048 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 400 -> 371 which remapped 11.1d3e out from underfull
>> osd.400
>> 2019-12-20 10:05:07.048 7f91a9c86ac0 10  existing pg_upmap_items
>> [400,371] remapped 11.1d3e out from underfull osd.400, will try
>> cancelling it entirely
>> 2019-12-20 10:05:07.048 7f91a9c86ac0 10  stddev 158655 -> 158725
>> 2019-12-20 10:05:07.052 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:07.052 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:07.056 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 400 -> 393 which remapped 11.2831 out from underfull
>> osd.400
>> 2019-12-20 10:05:07.056 7f91a9c86ac0 10  existing pg_upmap_items
>> [400,393] remapped 11.2831 out from underfull osd.400, will try
>> cancelling it entirely
>> 2019-12-20 10:05:07.056 7f91a9c86ac0 10  stddev 158655 -> 158729
>> 2019-12-20 10:05:07.060 7f91a9c86ac0 10  skipping overfull
>> 2019-12-20 10:05:07.060 7f91a9c86ac0 10  failed to find any changes for
>> overfull osds
>> 2019-12-20 10:05:07.060 7f91a9c86ac0 10  will try dropping existing
>> remapping pair 400 -> 371 which remapped 11.2e5e out from underfull
>> osd.400
>> 2019-12-20 10:05:07.060 7f91a9c86ac0 10  existing pg_upmap_items
>> [400,371] remapped 11.2e5e out from underfull osd.400, will try
>> cancelling it entirely
>> 2019-12-20 10:05:07.060 7f91a9c86ac0 10  stddev 158655 -> 158725
>> 2019-12-20 10:05:07.060 7f91a9c86ac0 10  hit local_fallback_retries 100
>> 2019-12-20 10:05:07.064 7f91a9c86ac0 10  num_changed = 0
>> no upmaps proposed
>>
>> Why is offline optimization failing?
>>
>> Regards
>> Thomas
>>
>> On 12.12.2019 at 01:34, David Zafman wrote:
>>> Thomas,
>>>
>>> I have a master branch version of the code to test.  The nautilus
>>> backport https://github.com/ceph/ceph/pull/31956 should be the same.
>>>
>>> Using your OSDMap, the code in the master branch, and some additional
>>> changes to osdmaptool, I was able to balance your cluster.  The
>>> osdmaptool changes simulate the mgr's active balancer behavior.  It
>>> took 302 rounds with a maximum of 10 upmaps per crush-rule set of
>>> pools per round.  With the default 1-minute sleeps inside the mgr it
>>> would take about 5 hours.  Obviously, recovery/backfill has to finish
>>> before the cluster settles into the new configuration.  It needed
>>> 3402 additional upmaps.
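>>> (302 rounds at one minute each is just over 5 hours, and 3402 upmaps
>>> over 302 rounds averages roughly 11 per round, which is consistent
>>> with the 10-upmap cap applying per crush-rule set of pools rather
>>> than globally.)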
>>>
>>> Because all pools for a given crush rule are balanced together, you
>>> can see that this is more balanced than Rich's configuration using
>>> Luminous.  The pool hdb_backup, which has its own rule, was the most
>>> difficult to balance.
>>>
>>> This balancer code is subject to change before final release of the
>>> next Nautilus point release.
>>>
>>>
>>> Final layout:
>>>
>>> osd.0 pgs 33
>>> osd.1 pgs 31
>>> osd.2 pgs 34
>>> osd.3 pgs 33
>>> osd.4 pgs 31
>>> osd.5 pgs 31
>>> osd.6 pgs 32
>>> osd.7 pgs 31
>>> osd.8 pgs 264
>>> osd.9 pgs 28
>>> osd.10 pgs 29
>>> osd.11 pgs 28
>>> osd.12 pgs 263
>>> osd.13 pgs 27
>>> osd.14 pgs 27
>>> osd.15 pgs 27
>>> osd.16 pgs 262
>>> osd.17 pgs 27
>>> osd.18 pgs 27
>>> osd.19 pgs 262
>>> osd.20 pgs 28
>>> osd.21 pgs 27
>>> osd.22 pgs 28
>>> osd.23 pgs 29
>>> osd.24 pgs 29
>>> osd.25 pgs 28
>>> osd.26 pgs 263
>>> osd.27 pgs 28
>>> osd.28 pgs 28
>>> osd.29 pgs 27
>>> osd.30 pgs 262
>>> osd.31 pgs 86
>>> osd.32 pgs 86
>>> osd.33 pgs 86
>>> osd.34 pgs 86
>>> osd.35 pgs 87
>>> osd.36 pgs 86
>>> osd.37 pgs 86
>>> osd.38 pgs 86
>>> osd.39 pgs 86
>>> osd.40 pgs 86
>>> osd.41 pgs 86
>>> osd.42 pgs 49
>>> osd.43 pgs 62
>>> osd.44 pgs 62
>>> osd.45 pgs 56
>>> osd.46 pgs 49
>>> osd.47 pgs 56
>>> osd.48 pgs 56
>>> osd.49 pgs 56
>>> osd.50 pgs 49
>>> osd.51 pgs 62
>>> osd.52 pgs 56
>>> osd.53 pgs 43
>>> osd.54 pgs 55
>>> osd.55 pgs 56
>>> osd.56 pgs 49
>>> osd.57 pgs 49
>>> osd.58 pgs 49
>>> osd.59 pgs 86
>>> osd.60 pgs 86
>>> osd.61 pgs 86
>>> osd.62 pgs 86
>>> osd.63 pgs 86
>>> osd.64 pgs 87
>>> osd.65 pgs 86
>>> osd.66 pgs 87
>>> osd.67 pgs 87
>>> osd.68 pgs 87
>>> osd.69 pgs 86
>>> osd.70 pgs 86
>>> osd.71 pgs 86
>>> osd.72 pgs 86
>>> osd.73 pgs 87
>>> osd.74 pgs 87
>>> osd.75 pgs 86
>>> osd.76 pgs 49
>>> osd.77 pgs 49
>>> osd.78 pgs 55
>>>
>>> osd.79 pgs 49
>>> osd.80 pgs 62
>>> osd.81 pgs 62
>>> osd.82 pgs 62
>>> osd.83 pgs 49
>>> osd.84 pgs 56
>>> osd.85 pgs 61
>>> osd.86 pgs 49
>>> osd.87 pgs 61
>>> osd.88 pgs 62
>>> osd.89 pgs 43
>>> osd.90 pgs 49
>>> osd.91 pgs 55
>>> osd.92 pgs 62
>>> osd.93 pgs 61
>>> osd.94 pgs 55
>>> osd.95 pgs 56
>>> osd.96 pgs 49
>>> osd.97 pgs 43
>>> osd.98 pgs 49
>>> osd.99 pgs 56
>>> osd.100 pgs 43
>>> osd.101 pgs 49
>>> osd.102 pgs 43
>>> osd.103 pgs 55
>>> osd.104 pgs 43
>>> osd.105 pgs 43
>>> osd.106 pgs 62
>>> osd.107 pgs 49
>>> osd.108 pgs 62
>>> osd.109 pgs 62
>>> osd.110 pgs 62
>>> osd.111 pgs 62
>>> osd.112 pgs 49
>>> osd.113 pgs 49
>>> osd.114 pgs 49
>>> osd.115 pgs 49
>>> osd.116 pgs 62
>>> osd.117 pgs 56
>>> osd.118 pgs 55
>>> osd.119 pgs 49
>>> osd.120 pgs 43
>>> osd.121 pgs 55
>>> osd.122 pgs 31
>>> osd.123 pgs 56
>>> osd.124 pgs 62
>>> osd.125 pgs 62
>>> osd.126 pgs 62
>>> osd.127 pgs 49
>>> osd.128 pgs 49
>>> osd.129 pgs 56
>>> osd.130 pgs 61
>>> osd.131 pgs 55
>>> osd.132 pgs 55
>>> osd.133 pgs 55
>>> osd.134 pgs 49
>>> osd.135 pgs 55
>>> osd.136 pgs 55
>>> osd.137 pgs 43
>>> osd.138 pgs 55
>>> osd.139 pgs 55
>>> osd.140 pgs 49
>>> osd.141 pgs 43
>>> osd.142 pgs 55
>>> osd.143 pgs 55
>>> osd.144 pgs 55
>>> osd.145 pgs 55
>>> osd.146 pgs 55
>>> osd.147 pgs 62
>>> osd.148 pgs 55
>>> osd.149 pgs 62
>>> osd.150 pgs 49
>>> osd.151 pgs 55
>>> osd.152 pgs 55
>>> osd.153 pgs 49
>>> osd.154 pgs 49
>>> osd.155 pgs 55
>>> osd.156 pgs 43
>>> osd.157 pgs 49
>>>
>>> osd.158 pgs 62
>>> osd.159 pgs 49
>>> osd.160 pgs 55
>>> osd.161 pgs 49
>>> osd.162 pgs 62
>>> osd.163 pgs 49
>>> osd.164 pgs 55
>>> osd.165 pgs 62
>>> osd.166 pgs 49
>>> osd.167 pgs 43
>>> osd.168 pgs 49
>>> osd.169 pgs 61
>>> osd.170 pgs 62
>>> osd.171 pgs 55
>>> osd.172 pgs 87
>>> osd.173 pgs 87
>>> osd.174 pgs 87
>>> osd.175 pgs 87
>>> osd.176 pgs 87
>>> osd.177 pgs 86
>>> osd.178 pgs 86
>>> osd.179 pgs 87
>>> osd.180 pgs 86
>>> osd.181 pgs 86
>>> osd.182 pgs 87
>>> osd.183 pgs 87
>>> osd.184 pgs 86
>>> osd.185 pgs 86
>>> osd.186 pgs 87
>>> osd.187 pgs 86
>>> osd.188 pgs 87
>>> osd.189 pgs 43
>>> osd.190 pgs 49
>>> osd.191 pgs 55
>>> osd.192 pgs 62
>>> osd.193 pgs 55
>>> osd.194 pgs 43
>>> osd.195 pgs 62
>>> osd.196 pgs 43
>>> osd.197 pgs 49
>>> osd.198 pgs 62
>>> osd.199 pgs 55
>>> osd.200 pgs 37
>>> osd.201 pgs 55
>>> osd.202 pgs 49
>>> osd.203 pgs 49
>>> osd.204 pgs 62
>>> osd.205 pgs 62
>>> osd.206 pgs 55
>>> osd.207 pgs 49
>>> osd.208 pgs 43
>>> osd.209 pgs 43
>>> osd.210 pgs 43
>>> osd.211 pgs 49
>>> osd.212 pgs 49
>>> osd.213 pgs 49
>>> osd.214 pgs 55
>>> osd.215 pgs 43
>>> osd.216 pgs 49
>>> osd.217 pgs 49
>>> osd.218 pgs 55
>>> osd.219 pgs 62
>>> osd.220 pgs 49
>>> osd.221 pgs 55
>>> osd.222 pgs 49
>>> osd.223 pgs 55
>>> osd.224 pgs 49
>>> osd.225 pgs 55
>>> osd.226 pgs 49
>>> osd.227 pgs 55
>>> osd.228 pgs 62
>>> osd.229 pgs 62
>>> osd.230 pgs 37
>>> osd.231 pgs 55
>>> osd.232 pgs 62
>>> osd.233 pgs 55
>>> osd.234 pgs 55
>>> osd.235 pgs 62
>>>
>>> osd.236 pgs 55
>>> osd.237 pgs 62
>>> osd.238 pgs 61
>>> osd.239 pgs 37
>>> osd.240 pgs 62
>>> osd.241 pgs 62
>>> osd.242 pgs 61
>>> osd.243 pgs 62
>>> osd.244 pgs 62
>>> osd.245 pgs 62
>>> osd.246 pgs 62
>>> osd.247 pgs 55
>>> osd.248 pgs 62
>>> osd.249 pgs 49
>>> osd.250 pgs 61
>>> osd.251 pgs 49
>>> osd.252 pgs 43
>>> osd.253 pgs 55
>>> osd.254 pgs 49
>>> osd.255 pgs 55
>>> osd.256 pgs 43
>>> osd.257 pgs 55
>>> osd.258 pgs 62
>>> osd.259 pgs 61
>>> osd.260 pgs 55
>>> osd.261 pgs 43
>>> osd.262 pgs 43
>>> osd.263 pgs 49
>>> osd.264 pgs 49
>>> osd.265 pgs 62
>>> osd.266 pgs 49
>>> osd.267 pgs 62
>>> osd.268 pgs 87
>>> osd.269 pgs 86
>>> osd.270 pgs 87
>>> osd.271 pgs 86
>>> osd.272 pgs 270
>>> osd.273 pgs 271
>>> osd.274 pgs 270
>>> osd.275 pgs 270
>>> osd.276 pgs 270
>>> osd.277 pgs 270
>>> osd.278 pgs 270
>>> osd.279 pgs 270
>>> osd.280 pgs 270
>>> osd.281 pgs 270
>>> osd.282 pgs 270
>>> osd.283 pgs 271
>>> osd.284 pgs 270
>>> osd.285 pgs 271
>>> osd.286 pgs 270
>>> osd.287 pgs 270
>>> osd.288 pgs 270
>>> osd.289 pgs 270
>>> osd.290 pgs 270
>>> osd.291 pgs 270
>>> osd.292 pgs 270
>>> osd.293 pgs 270
>>> osd.294 pgs 270
>>> osd.295 pgs 270
>>> osd.296 pgs 270
>>> osd.297 pgs 270
>>> osd.298 pgs 270
>>> osd.299 pgs 270
>>> osd.300 pgs 270
>>> osd.301 pgs 270
>>> osd.302 pgs 270
>>> osd.303 pgs 270
>>> osd.304 pgs 270
>>> osd.305 pgs 270
>>> osd.306 pgs 270
>>> osd.307 pgs 270
>>> osd.308 pgs 270
>>> osd.309 pgs 270
>>> osd.310 pgs 270
>>> osd.311 pgs 270
>>> osd.312 pgs 270
>>> osd.313 pgs 270
>>>
>>> osd.314 pgs 270
>>> osd.315 pgs 270
>>> osd.316 pgs 270
>>> osd.317 pgs 270
>>> osd.318 pgs 270
>>> osd.319 pgs 270
>>> osd.320 pgs 271
>>> osd.321 pgs 271
>>> osd.322 pgs 271
>>> osd.323 pgs 271
>>> osd.324 pgs 271
>>> osd.325 pgs 270
>>> osd.326 pgs 270
>>> osd.327 pgs 270
>>> osd.328 pgs 270
>>> osd.329 pgs 270
>>> osd.330 pgs 270
>>> osd.331 pgs 270
>>> osd.332 pgs 271
>>> osd.333 pgs 270
>>> osd.334 pgs 271
>>> osd.335 pgs 270
>>> osd.336 pgs 270
>>> osd.337 pgs 271
>>> osd.338 pgs 271
>>> osd.339 pgs 270
>>> osd.340 pgs 270
>>> osd.341 pgs 270
>>> osd.342 pgs 270
>>> osd.343 pgs 270
>>> osd.344 pgs 270
>>> osd.345 pgs 271
>>> osd.346 pgs 270
>>> osd.347 pgs 271
>>> osd.348 pgs 270
>>> osd.349 pgs 270
>>> osd.350 pgs 270
>>> osd.351 pgs 270
>>> osd.352 pgs 270
>>> osd.353 pgs 270
>>> osd.354 pgs 270
>>> osd.355 pgs 270
>>> osd.356 pgs 270
>>> osd.357 pgs 270
>>> osd.358 pgs 270
>>> osd.359 pgs 270
>>> osd.360 pgs 270
>>> osd.361 pgs 270
>>> osd.362 pgs 270
>>> osd.363 pgs 270
>>> osd.364 pgs 270
>>> osd.365 pgs 270
>>> osd.366 pgs 270
>>> osd.367 pgs 270
>>> osd.368 pgs 86
>>> osd.369 pgs 86
>>> osd.370 pgs 87
>>> osd.371 pgs 270
>>> osd.372 pgs 270
>>> osd.373 pgs 270
>>> osd.374 pgs 270
>>> osd.375 pgs 270
>>> osd.376 pgs 270
>>> osd.377 pgs 270
>>> osd.378 pgs 270
>>> osd.379 pgs 270
>>> osd.380 pgs 270
>>> osd.381 pgs 270
>>> osd.382 pgs 270
>>> osd.383 pgs 270
>>> osd.384 pgs 271
>>> osd.385 pgs 270
>>> osd.386 pgs 270
>>> osd.387 pgs 270
>>> osd.388 pgs 270
>>> osd.389 pgs 270
>>> osd.390 pgs 270
>>> osd.391 pgs 270
>>>
>>> osd.392 pgs 270
>>> osd.393 pgs 270
>>> osd.394 pgs 270
>>> osd.395 pgs 270
>>> osd.396 pgs 270
>>> osd.397 pgs 270
>>> osd.398 pgs 270
>>> osd.399 pgs 270
>>> osd.400 pgs 270
>>> osd.401 pgs 270
>>> osd.402 pgs 270
>>> osd.403 pgs 270
>>> osd.404 pgs 270
>>> osd.405 pgs 270
>>> osd.406 pgs 271
>>> osd.407 pgs 270
>>> osd.408 pgs 270
>>> osd.409 pgs 270
>>> osd.410 pgs 270
>>> osd.411 pgs 270
>>> osd.412 pgs 270
>>> osd.413 pgs 270
>>> osd.414 pgs 270
>>> osd.415 pgs 270
>>> osd.416 pgs 270
>>> osd.417 pgs 270
>>> osd.418 pgs 270
>>> osd.419 pgs 87
>>> osd.420 pgs 87
>>> osd.421 pgs 86
>>> osd.422 pgs 86
>>> osd.423 pgs 86
>>> osd.424 pgs 86
>>> osd.425 pgs 86
>>> osd.426 pgs 86
>>> osd.427 pgs 86
>>> osd.428 pgs 86
>>> osd.429 pgs 86
>>> osd.430 pgs 86
>>> osd.431 pgs 86
>>> osd.432 pgs 86
>>> osd.433 pgs 86
>>> osd.434 pgs 86
>>> osd.435 pgs 58
>>> osd.436 pgs 58
>>> osd.437 pgs 58
>>> osd.438 pgs 58
>>> osd.439 pgs 58
>>> osd.440 pgs 58
>>> osd.441 pgs 58
>>>
>>>
>>> David
>>>
>>> On 12/11/19 6:12 AM, Thomas Schneider wrote:
>>>> Hello David,
>>>>
>>>> I'm experiencing issues with OSD balancing, too.
>>>> My ceph cluster is running on release
>>>> ceph version 14.2.4.1 (596a387fb278758406deabf997735a1f706660c9)
>>>> nautilus (stable)
>>>>
>>>> Would you be able to test the latest code on my OSDmap and verify
>>>> whether balancing would work?
>>>> I have attached it to this email.
>>>>
>>>> Regards
>>>> Thomas
>>>>
>>>>
>>
>>
>>
>>
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



