Re: RESEND: Re: PG Balancer Upmap mode not working

Hello David,

I'm experiencing issues with OSD balancing, too.
My Ceph cluster is running release:
ceph version 14.2.4.1 (596a387fb278758406deabf997735a1f706660c9)
nautilus (stable)

Would you be able to test the latest code against my OSDMap and verify
whether balancing works?
I have attached it to this email.
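(For anyone else following along: a binary OSDMap like the attached one
can be exported with

    ceph osd getmap -o osdmap.bin

and then fed to osdmaptool offline.)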

Regards
Thomas

On 11.12.2019 at 02:36, David Zafman wrote:
>
> Rich,
>
> Using your OSDMap, the code in https://github.com/ceph/ceph/pull/31992,
> and some additional changes to osdmaptool, I was able to balance your
> cluster.  The osdmaptool changes simulate the mgr's active balancer
> behavior.  No round took more than 0.388674 seconds to calculate more
> upmaps, and that was on a virtual machine used for development.  It
> took 42 rounds with a maximum of 10 upmaps created per round; with the
> default 1-minute sleep per round inside the mgr, that would take 42
> minutes.  In total it needed 409 additional upmaps and removed 6.
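(For context: my understanding is that the offline calculation can be
approximated with the stock osdmaptool upmap options, for example

    # compute up to 10 upmaps against the exported binary map
    # (osdmap.bin is just an example name); the output file ends up
    # containing "ceph osd pg-upmap-items ..." commands, and
    # --upmap-pool can restrict the calculation to a single pool
    osdmaptool osdmap.bin --upmap upmaps.sh --upmap-max 10

though the osdmaptool changes from the pull request above are presumably
not in 14.2.4, so this is only a rough sketch of one balancer round.)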
>
> Each pool is balanced individually, so the final result is slightly
> less than perfect (within +/- 1 PG of the weighted target).
>
> Final layout:
>
> osd.0 pgs 107
> osd.1 pgs 106
> osd.2 pgs 106
> osd.3 pgs 106
> osd.4 pgs 106
> osd.5 pgs 106
> osd.6 pgs 107
> osd.7 pgs 105
> osd.8 pgs 106
> osd.9 pgs 105
> osd.10 pgs 105
> osd.11 pgs 106
> osd.12 pgs 107
> osd.13 pgs 107
> osd.14 pgs 106
> osd.15 pgs 106
> osd.16 pgs 107
> osd.17 pgs 106
> osd.18 pgs 106
> osd.19 pgs 105
> osd.20 pgs 106
> osd.21 pgs 106
> osd.22 pgs 106
> osd.23 pgs 106
> osd.24 pgs 107
> osd.25 pgs 106
> osd.26 pgs 107
> osd.28 pgs 106
> osd.29 pgs 106
> osd.30 pgs 106
> osd.31 pgs 106
> osd.32 pgs 106
> osd.33 pgs 106
> osd.34 pgs 106
> osd.35 pgs 106
> osd.36 pgs 108
> osd.37 pgs 106
> osd.38 pgs 107
> osd.39 pgs 106
> osd.40 pgs 106
> osd.41 pgs 106
> osd.42 pgs 106
> osd.43 pgs 106
> osd.44 pgs 106
> osd.45 pgs 106
> osd.46 pgs 106
> osd.47 pgs 106
> osd.48 pgs 108
> osd.49 pgs 106
> osd.50 pgs 107
> osd.51 pgs 106
> osd.52 pgs 106
> osd.53 pgs 106
> osd.54 pgs 106
> osd.55 pgs 106
> osd.56 pgs 106
> osd.57 pgs 106
> osd.58 pgs 106
> osd.59 pgs 106
> osd.60 pgs 107
> osd.61 pgs 106
> osd.62 pgs 106
> osd.63 pgs 106
> osd.64 pgs 106
> osd.65 pgs 106
> osd.66 pgs 106
> osd.67 pgs 106
> osd.68 pgs 106
> osd.69 pgs 106
> osd.70 pgs 106
> osd.71 pgs 106
> osd.72 pgs 106
> osd.73 pgs 104
> osd.74 pgs 106
>
> osd.75 pgs 105
> osd.76 pgs 106
> osd.77 pgs 106
> osd.78 pgs 106
> osd.79 pgs 106
> osd.80 pgs 105
> osd.81 pgs 105
> osd.82 pgs 106
> osd.83 pgs 106
> osd.84 pgs 104
> osd.85 pgs 104
> osd.86 pgs 106
> osd.87 pgs 105
> osd.88 pgs 104
> osd.89 pgs 106
> osd.90 pgs 105
> osd.91 pgs 104
> osd.92 pgs 105
> osd.93 pgs 106
> osd.94 pgs 106
> osd.95 pgs 106
> osd.96 pgs 106
> osd.97 pgs 106
> osd.98 pgs 106
> osd.99 pgs 105
> osd.100 pgs 106
> osd.101 pgs 106
> osd.102 pgs 106
> osd.103 pgs 106
> osd.104 pgs 106
> osd.105 pgs 106
> osd.106 pgs 106
> osd.107 pgs 106
> osd.108 pgs 105
> osd.109 pgs 106
> osd.110 pgs 105
> osd.111 pgs 105
> osd.112 pgs 105
> osd.113 pgs 105
> osd.114 pgs 106
> osd.115 pgs 105
> osd.116 pgs 105
> osd.117 pgs 104
> osd.118 pgs 106
> osd.119 pgs 105
> osd.120 pgs 105
> osd.121 pgs 105
> osd.122 pgs 106
> osd.123 pgs 106
> osd.124 pgs 106
> osd.125 pgs 105
> osd.126 pgs 104
> osd.127 pgs 105
> osd.128 pgs 106
> osd.129 pgs 104
> osd.130 pgs 106
> osd.131 pgs 106
> osd.132 pgs 105
> osd.133 pgs 106
> osd.134 pgs 105
> osd.135 pgs 106
> osd.136 pgs 105
> osd.137 pgs 105
> osd.138 pgs 104
> osd.139 pgs 105
> osd.140 pgs 105
> osd.141 pgs 105
> osd.142 pgs 105
> osd.143 pgs 105
> osd.144 pgs 105
> osd.145 pgs 105
> osd.146 pgs 105
> osd.147 pgs 105
> osd.148 pgs 105
> osd.149 pgs 105
> osd.150 pgs 105
> osd.151 pgs 105
> osd.152 pgs 105
>
> osd.153 pgs 105
> osd.154 pgs 105
> osd.155 pgs 105
> osd.156 pgs 143
> osd.157 pgs 143
> osd.158 pgs 143
> osd.159 pgs 142
> osd.160 pgs 143
> osd.161 pgs 142
> osd.162 pgs 141
> osd.163 pgs 140
> osd.164 pgs 140
> osd.165 pgs 140
> osd.166 pgs 141
> osd.167 pgs 141
> osd.168 pgs 143
> osd.169 pgs 143
> osd.170 pgs 143
> osd.171 pgs 141
> osd.172 pgs 141
> osd.173 pgs 142
> osd.174 pgs 140
> osd.175 pgs 141
> osd.176 pgs 141
> osd.177 pgs 140
> osd.178 pgs 140
> osd.179 pgs 140
> osd.180 pgs 140
> osd.181 pgs 140
> osd.182 pgs 140
> osd.183 pgs 140
> osd.184 pgs 141
> osd.185 pgs 141
> osd.186 pgs 140
> osd.187 pgs 140
> osd.188 pgs 140
> osd.189 pgs 140
> osd.190 pgs 141
> osd.191 pgs 140
> osd.192 pgs 143
> osd.193 pgs 143
> osd.194 pgs 143
> osd.195 pgs 143
> osd.196 pgs 142
> osd.197 pgs 142
> osd.198 pgs 140
> osd.199 pgs 141
> osd.200 pgs 140
> osd.201 pgs 140
> osd.202 pgs 141
> osd.203 pgs 141
> osd.204 pgs 141
> osd.205 pgs 140
> osd.206 pgs 140
> osd.207 pgs 140
> osd.208 pgs 140
> osd.209 pgs 140
> osd.210 pgs 140
> osd.211 pgs 140
> osd.212 pgs 140
> osd.213 pgs 141
> osd.214 pgs 140
> osd.215 pgs 140
>
> David
>
> On 12/10/19 5:11 PM, Rich Bade wrote:
>> Thanks David, I've sent it to you directly.
>>
>> Rich

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



