Re: Degraded objects while OSD is being added/filled

Hello,


I noticed the same behaviour in our cluster. 


ceph version 10.2.7 (50e863e0f4bc8f4b9e31156de690d765af245185)



    cluster 0a9f2d69-5905-4369-81ae-e36e4a791831
     health HEALTH_WARN
            1 pgs backfill_toofull
            4366 pgs backfill_wait
            11 pgs backfilling
            45 pgs degraded
            45 pgs recovery_wait
            45 pgs stuck degraded
            4423 pgs stuck unclean
            recovery 181563/302722835 objects degraded (0.060%)
            recovery 57192879/302722835 objects misplaced (18.893%)
            1 near full osd(s)
            noout,nodeep-scrub flag(s) set
     monmap e3: 3 mons at {0=130.232.243.65:6789/0,1=130.232.243.66:6789/0,2=130.232.243.67:6789/0}
            election epoch 356, quorum 0,1,2 0,1,2
     osdmap e388588: 260 osds: 260 up, 242 in; 4378 remapped pgs
            flags nearfull,noout,nodeep-scrub,require_jewel_osds
      pgmap v80658624: 25728 pgs, 8 pools, 202 TB data, 89212 kobjects
            612 TB used, 300 TB / 912 TB avail
            181563/302722835 objects degraded (0.060%)
            57192879/302722835 objects misplaced (18.893%)
               21301 active+clean
                4366 active+remapped+wait_backfill
                  45 active+recovery_wait+degraded
                  11 active+remapped+backfilling
                   4 active+clean+scrubbing
                   1 active+remapped+backfill_toofull
recovery io 421 MB/s, 155 objects/s
  client io 201 kB/s rd, 2034 B/s wr, 75 op/s rd, 0 op/s wr


I'm currently doing a rolling migration from Puppet on Ubuntu to Ansible on RHEL. I started with a healthy cluster, evacuated some nodes by setting their weight to 0, removed them from the cluster, and re-added them with the ansible playbook.

Basically I ran

        ceph osd crush remove osd.$num
        ceph osd rm $num
        ceph auth del osd.$num

in a loop for the OSDs I was replacing, and then let the ansible ceph-osd playbook bring the host back into the cluster. The crush map is attached.
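
Spelled out, that loop was essentially the following (a sketch only; $osds is an illustrative placeholder for the list of OSD ids being replaced, not the exact script used):

        for num in $osds; do
            # drop the OSD from the crush map, the osd map and the auth database
            ceph osd crush remove osd.$num
            ceph osd rm $num
            ceph auth del osd.$num
        done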

-- 
  Eino Tuominen 



From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Gregory Farnum <gfarnum@xxxxxxxxxx>
Sent: Friday, June 30, 2017 23:38
To: Andras Pataki; ceph-users
Subject: Re: [ceph-users] Degraded objects while OSD is being added/filled
 
On Wed, Jun 21, 2017 at 6:57 AM Andras Pataki <apataki@xxxxxxxxxxxxxxxxxxxxx> wrote:
Hi cephers,

I noticed something I don't understand about ceph's behavior when adding an OSD.  When I start with a clean cluster (all PG's active+clean) and add an OSD (via ceph-deploy for example), the crush map gets updated and PGs get reassigned to different OSDs, and the new OSD starts getting filled with data.  As the new OSD gets filled, I start seeing PGs in degraded states.  Here is an example:

      pgmap v52068792: 42496 pgs, 6 pools, 1305 TB data, 390 Mobjects
            3164 TB used, 781 TB / 3946 TB avail
            8017/994261437 objects degraded (0.001%)
            2220581/994261437 objects misplaced (0.223%)
               42393 active+clean
                  91 active+remapped+wait_backfill
                   9 active+clean+scrubbing+deep
                   1 active+recovery_wait+degraded
                   1 active+clean+scrubbing
                   1 active+remapped+backfilling

Any ideas why there would be any persistent degradation in the cluster while the newly added drive is being filled?  It takes perhaps a day or two to fill the drive - and during all this time the cluster seems to be running degraded.  As data is written to the cluster, the number of degraded objects increases over time.  Once the newly added OSD is filled, the cluster comes back to clean again.
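
One way to follow this while the backfill is running (output omitted; the watch interval is arbitrary):

        # re-check the cluster summary periodically
        watch -n 60 ceph -s
        # list the PGs currently stuck in the degraded state
        ceph pg dump_stuck degraded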

Here is the PG that is degraded in this picture:

7.87c    1    0    2    0    0    4194304    7    7    active+recovery_wait+degraded    2017-06-20 14:12:44.119921    344610'7    583572:2797    [402,521]    402    [402,521]    402    344610'7    2017-06-16 06:04:55.822503    344610'7    2017-06-16 06:04:55.822503

The newly added osd here is 521.  Before it got added, this PG had two replicas clean, but one got forgotten somehow?

This sounds a bit concerning at first glance. Can you provide some output of exactly what commands you're invoking, and the "ceph -s" output as it changes in response?

I really don't see how adding a new OSD can result in it "forgetting" about existing valid copies — it's definitely not supposed to — so I wonder if there's a collision in how it's deciding to remove old locations.

Are you running with only two copies of your data? It shouldn't matter but there could also be errors resulting in a behavioral difference between two and three copies.
-Greg
 

Other remapped PGs have 521 in their "up" set but still have the two existing copies in their "acting" set - and no degradation is shown.  Examples:

2.f24    14282    0    16    28564    0    51014850801    3102    3102    active+remapped+wait_backfill    2017-06-20 14:12:42.650308    583553'2033479    583573:2033266    [467,521]    467    [467,499]    467    582430'2033337    2017-06-16 09:08:51.055131    582036'2030837    2017-05-31 20:37:54.831178
6.2b7d    10499    0    140    20998    0    37242874687    3673    3673    active+remapped+wait_backfill    2017-06-20 14:12:42.070019    583569'165163    583572:342128    [541,37,521]    541    [541,37,532]    541    582430'161890    2017-06-18 09:42:49.148402    582430'161890    2017-06-18 09:42:49.148402
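
For any of these PGs, the current up and acting sets and the full peering state can also be pulled directly, for example:

        # print the up set and acting set of one of the remapped PGs
        ceph pg map 2.f24
        # dump the full recovery/peering state of the degraded PG
        ceph pg 7.87c query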

We are running the latest Jewel patch level everywhere (10.2.7).  Any insights would be appreciated.

Andras

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 osd.16
device 17 osd.17
device 18 osd.18
device 19 osd.19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 osd.26
device 27 osd.27
device 28 osd.28
device 29 osd.29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35
device 36 osd.36
device 37 osd.37
device 38 osd.38
device 39 osd.39
device 40 osd.40
device 41 osd.41
device 42 osd.42
device 43 osd.43
device 44 osd.44
device 45 osd.45
device 46 osd.46
device 47 osd.47
device 48 osd.48
device 49 osd.49
device 50 osd.50
device 51 osd.51
device 52 osd.52
device 53 osd.53
device 54 osd.54
device 55 osd.55
device 56 osd.56
device 57 osd.57
device 58 osd.58
device 59 osd.59
device 60 osd.60
device 61 osd.61
device 62 osd.62
device 63 osd.63
device 64 osd.64
device 65 osd.65
device 66 osd.66
device 67 osd.67
device 68 osd.68
device 69 osd.69
device 70 osd.70
device 71 osd.71
device 72 osd.72
device 73 osd.73
device 74 osd.74
device 75 osd.75
device 76 osd.76
device 77 osd.77
device 78 osd.78
device 79 osd.79
device 80 osd.80
device 81 osd.81
device 82 osd.82
device 83 osd.83
device 84 osd.84
device 85 osd.85
device 86 osd.86
device 87 osd.87
device 88 osd.88
device 89 osd.89
device 90 osd.90
device 91 osd.91
device 92 osd.92
device 93 osd.93
device 94 osd.94
device 95 osd.95
device 96 osd.96
device 97 osd.97
device 98 osd.98
device 99 osd.99
device 100 osd.100
device 101 osd.101
device 102 osd.102
device 103 osd.103
device 104 osd.104
device 105 osd.105
device 106 osd.106
device 107 osd.107
device 108 osd.108
device 109 osd.109
device 110 osd.110
device 111 osd.111
device 112 osd.112
device 113 osd.113
device 114 osd.114
device 115 osd.115
device 116 osd.116
device 117 osd.117
device 118 osd.118
device 119 osd.119
device 120 osd.120
device 121 osd.121
device 122 osd.122
device 123 osd.123
device 124 osd.124
device 125 osd.125
device 126 osd.126
device 127 osd.127
device 128 osd.128
device 129 osd.129
device 130 osd.130
device 131 osd.131
device 132 osd.132
device 133 osd.133
device 134 osd.134
device 135 osd.135
device 136 osd.136
device 137 osd.137
device 138 osd.138
device 139 osd.139
device 140 osd.140
device 141 osd.141
device 142 osd.142
device 143 osd.143
device 144 osd.144
device 145 osd.145
device 146 osd.146
device 147 osd.147
device 148 osd.148
device 149 osd.149
device 150 osd.150
device 151 osd.151
device 152 osd.152
device 153 osd.153
device 154 osd.154
device 155 osd.155
device 156 osd.156
device 157 osd.157
device 158 osd.158
device 159 osd.159
device 160 osd.160
device 161 osd.161
device 162 osd.162
device 163 osd.163
device 164 osd.164
device 165 osd.165
device 166 osd.166
device 167 osd.167
device 168 osd.168
device 169 osd.169
device 170 osd.170
device 171 osd.171
device 172 osd.172
device 173 osd.173
device 174 osd.174
device 175 osd.175
device 176 osd.176
device 177 osd.177
device 178 osd.178
device 179 osd.179
device 180 osd.180
device 181 osd.181
device 182 osd.182
device 183 osd.183
device 184 osd.184
device 185 osd.185
device 186 osd.186
device 187 osd.187
device 188 osd.188
device 189 osd.189
device 190 osd.190
device 191 osd.191
device 192 osd.192
device 193 osd.193
device 194 osd.194
device 195 osd.195
device 196 osd.196
device 197 osd.197
device 198 osd.198
device 199 osd.199
device 200 osd.200
device 201 osd.201
device 202 osd.202
device 203 osd.203
device 204 osd.204
device 205 osd.205
device 206 osd.206
device 207 osd.207
device 208 osd.208
device 209 osd.209
device 210 osd.210
device 211 osd.211
device 212 osd.212
device 213 osd.213
device 214 osd.214
device 215 osd.215
device 216 osd.216
device 217 osd.217
device 218 osd.218
device 219 osd.219
device 220 osd.220
device 221 osd.221
device 222 osd.222
device 223 osd.223
device 224 osd.224
device 225 osd.225
device 226 osd.226
device 227 osd.227
device 228 osd.228
device 229 osd.229
device 230 osd.230
device 231 osd.231
device 232 osd.232
device 233 osd.233
device 234 osd.234
device 235 osd.235
device 236 osd.236
device 237 osd.237
device 238 osd.238
device 239 osd.239
device 240 osd.240
device 241 osd.241
device 242 osd.242
device 243 osd.243
device 244 osd.244
device 245 osd.245
device 246 osd.246
device 247 osd.247
device 248 osd.248
device 249 osd.249
device 250 osd.250
device 251 osd.251
device 252 osd.252
device 253 osd.253
device 254 osd.254
device 255 osd.255
device 256 osd.256
device 257 osd.257
device 258 osd.258
device 259 osd.259

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-osd-05 {
	id -9		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.11 weight 2.718
	item osd.16 weight 2.718
	item osd.21 weight 2.718
	item osd.23 weight 2.718
	item osd.26 weight 2.718
	item osd.32 weight 2.718
	item osd.59 weight 2.718
	item osd.66 weight 2.718
	item osd.79 weight 2.718
	item osd.87 weight 2.718
}
host ceph-osd-08 {
	id -2		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.6 weight 2.718
	item osd.7 weight 2.718
	item osd.29 weight 2.718
	item osd.37 weight 2.718
	item osd.38 weight 2.718
	item osd.43 weight 2.718
	item osd.47 weight 2.718
	item osd.58 weight 2.718
	item osd.74 weight 2.718
	item osd.80 weight 2.718
}
host ceph-osd-11 {
	id -11		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.88 weight 2.718
	item osd.92 weight 2.718
	item osd.93 weight 2.718
	item osd.94 weight 2.718
	item osd.95 weight 2.718
	item osd.97 weight 2.718
	item osd.98 weight 2.718
	item osd.99 weight 2.718
	item osd.100 weight 2.718
	item osd.101 weight 2.718
}
host ceph-osd-14 {
	id -19		# do not change unnecessarily
	# weight 27.300
	alg straw
	hash 0	# rjenkins1
	item osd.130 weight 2.730
	item osd.131 weight 2.730
	item osd.133 weight 2.730
	item osd.134 weight 2.730
	item osd.135 weight 2.730
	item osd.146 weight 2.730
	item osd.147 weight 2.730
	item osd.148 weight 2.730
	item osd.132 weight 2.730
	item osd.149 weight 2.730
}
host ceph-osd-17 {
	id -23		# do not change unnecessarily
	# weight 54.600
	alg straw
	hash 0	# rjenkins1
	item osd.159 weight 5.460
	item osd.160 weight 5.460
	item osd.161 weight 5.460
	item osd.162 weight 5.460
	item osd.163 weight 5.460
	item osd.164 weight 5.460
	item osd.165 weight 5.460
	item osd.166 weight 5.460
	item osd.167 weight 5.460
	item osd.179 weight 5.460
}
host ceph-osd-20 {
	id -24		# do not change unnecessarily
	# weight 54.600
	alg straw
	hash 0	# rjenkins1
	item osd.190 weight 5.460
	item osd.191 weight 5.460
	item osd.192 weight 5.460
	item osd.193 weight 5.460
	item osd.194 weight 5.460
	item osd.195 weight 5.460
	item osd.196 weight 5.460
	item osd.197 weight 5.460
	item osd.198 weight 5.460
	item osd.199 weight 5.460
}
host ceph-osd-23 {
	id -28		# do not change unnecessarily
	# weight 108.926
	alg straw
	hash 0	# rjenkins1
	item osd.230 weight 5.446
	item osd.232 weight 5.446
	item osd.234 weight 5.446
	item osd.236 weight 5.446
	item osd.237 weight 5.446
	item osd.239 weight 5.446
	item osd.240 weight 5.446
	item osd.241 weight 5.446
	item osd.242 weight 5.446
	item osd.243 weight 5.446
	item osd.244 weight 5.446
	item osd.245 weight 5.446
	item osd.246 weight 5.446
	item osd.247 weight 5.446
	item osd.248 weight 5.446
	item osd.249 weight 5.446
	item osd.250 weight 5.446
	item osd.251 weight 5.446
	item osd.252 weight 5.446
	item osd.253 weight 5.446
}
host ceph-osd-02 {
	id -6		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.2 weight 2.718
	item osd.17 weight 2.718
	item osd.28 weight 2.718
	item osd.35 weight 2.718
	item osd.41 weight 2.718
	item osd.48 weight 2.718
	item osd.51 weight 2.718
	item osd.57 weight 2.718
	item osd.76 weight 2.718
	item osd.84 weight 2.718
}
datacenter edu {
	id -12		# do not change unnecessarily
	# weight 354.141
	alg straw
	hash 0	# rjenkins1
	item ceph-osd-05 weight 27.179
	item ceph-osd-08 weight 27.179
	item ceph-osd-11 weight 27.179
	item ceph-osd-14 weight 27.300
	item ceph-osd-17 weight 54.600
	item ceph-osd-20 weight 54.600
	item ceph-osd-23 weight 108.926
	item ceph-osd-02 weight 27.179
}
host ceph-osd-06 {
	id -7		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.12 weight 2.718
	item osd.18 weight 2.718
	item osd.20 weight 2.718
	item osd.24 weight 2.718
	item osd.27 weight 2.718
	item osd.42 weight 2.718
	item osd.61 weight 2.718
	item osd.78 weight 2.718
	item osd.86 weight 2.718
	item osd.90 weight 2.718
}
host ceph-osd-09 {
	id -3		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.4 weight 2.718
	item osd.5 weight 2.718
	item osd.45 weight 2.718
	item osd.55 weight 2.718
	item osd.65 weight 2.718
	item osd.68 weight 2.718
	item osd.69 weight 2.718
	item osd.73 weight 2.718
	item osd.77 weight 2.718
	item osd.91 weight 2.718
}
host ceph-osd-12 {
	id -13		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.110 weight 2.718
	item osd.111 weight 2.718
	item osd.112 weight 2.718
	item osd.113 weight 2.718
	item osd.114 weight 2.718
	item osd.115 weight 2.718
	item osd.116 weight 2.718
	item osd.117 weight 2.718
	item osd.118 weight 2.718
	item osd.119 weight 2.718
}
host ceph-osd-15 {
	id -17		# do not change unnecessarily
	# weight 24.570
	alg straw
	hash 0	# rjenkins1
	item osd.137 weight 2.730
	item osd.138 weight 2.730
	item osd.139 weight 2.730
	item osd.140 weight 2.730
	item osd.141 weight 2.730
	item osd.143 weight 2.730
	item osd.144 weight 2.730
	item osd.145 weight 2.730
	item osd.211 weight 2.730
}
host ceph-osd-18 {
	id -25		# do not change unnecessarily
	# weight 54.600
	alg straw
	hash 0	# rjenkins1
	item osd.168 weight 5.460
	item osd.169 weight 5.460
	item osd.170 weight 5.460
	item osd.171 weight 5.460
	item osd.172 weight 5.460
	item osd.173 weight 5.460
	item osd.174 weight 5.460
	item osd.175 weight 5.460
	item osd.176 weight 5.460
	item osd.178 weight 5.460
}
host ceph-osd-21 {
	id -26		# do not change unnecessarily
	# weight 54.600
	alg straw
	hash 0	# rjenkins1
	item osd.200 weight 5.460
	item osd.201 weight 5.460
	item osd.202 weight 5.460
	item osd.203 weight 5.460
	item osd.204 weight 5.460
	item osd.205 weight 5.460
	item osd.206 weight 5.460
	item osd.207 weight 5.460
	item osd.208 weight 5.460
	item osd.209 weight 5.460
}
host ceph-osd-24 {
	id -29		# do not change unnecessarily
	# weight 59.909
	alg straw
	hash 0	# rjenkins1
	item osd.227 weight 5.446
	item osd.231 weight 5.446
	item osd.233 weight 5.446
	item osd.235 weight 5.446
	item osd.238 weight 5.446
	item osd.254 weight 5.446
	item osd.255 weight 5.446
	item osd.256 weight 5.446
	item osd.257 weight 5.446
	item osd.258 weight 5.446
	item osd.259 weight 5.446
}
host ceph-osd-03 {
	id -5		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.3 weight 2.718
	item osd.9 weight 2.718
	item osd.33 weight 2.718
	item osd.36 weight 2.718
	item osd.44 weight 2.718
	item osd.46 weight 2.718
	item osd.56 weight 2.718
	item osd.60 weight 2.718
	item osd.75 weight 2.718
	item osd.83 weight 2.718
}
datacenter ham {
	id -14		# do not change unnecessarily
	# weight 302.395
	alg straw
	hash 0	# rjenkins1
	item ceph-osd-06 weight 27.179
	item ceph-osd-09 weight 27.179
	item ceph-osd-12 weight 27.179
	item ceph-osd-15 weight 24.570
	item ceph-osd-18 weight 54.600
	item ceph-osd-21 weight 54.600
	item ceph-osd-24 weight 59.909
	item ceph-osd-03 weight 27.179
}
host ceph-osd-04 {
	id -10		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.1 weight 2.718
	item osd.15 weight 2.718
	item osd.19 weight 2.718
	item osd.22 weight 2.718
	item osd.25 weight 2.718
	item osd.31 weight 2.718
	item osd.53 weight 2.718
	item osd.64 weight 2.718
	item osd.71 weight 2.718
	item osd.82 weight 2.718
}
host ceph-osd-07 {
	id -8		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.0 weight 2.718
	item osd.13 weight 2.718
	item osd.14 weight 2.718
	item osd.34 weight 2.718
	item osd.54 weight 2.718
	item osd.63 weight 2.718
	item osd.67 weight 2.718
	item osd.70 weight 2.718
	item osd.85 weight 2.718
	item osd.89 weight 2.718
}
host ceph-osd-10 {
	id -16		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.120 weight 2.718
	item osd.121 weight 2.718
	item osd.122 weight 2.718
	item osd.123 weight 2.718
	item osd.124 weight 2.718
	item osd.125 weight 2.718
	item osd.126 weight 2.718
	item osd.127 weight 2.718
	item osd.128 weight 2.718
	item osd.129 weight 2.718
}
host ceph-osd-13 {
	id -20		# do not change unnecessarily
	# weight 27.300
	alg straw
	hash 0	# rjenkins1
	item osd.102 weight 2.730
	item osd.103 weight 2.730
	item osd.104 weight 2.730
	item osd.106 weight 2.730
	item osd.107 weight 2.730
	item osd.108 weight 2.730
	item osd.109 weight 2.730
	item osd.52 weight 2.730
	item osd.96 weight 2.730
	item osd.105 weight 2.730
}
host ceph-osd-16 {
	id -21		# do not change unnecessarily
	# weight 54.600
	alg straw
	hash 0	# rjenkins1
	item osd.155 weight 5.460
	item osd.150 weight 5.460
	item osd.151 weight 5.460
	item osd.152 weight 5.460
	item osd.153 weight 5.460
	item osd.154 weight 5.460
	item osd.156 weight 5.460
	item osd.157 weight 5.460
	item osd.158 weight 5.460
	item osd.177 weight 5.460
}
host ceph-osd-19 {
	id -22		# do not change unnecessarily
	# weight 54.600
	alg straw
	hash 0	# rjenkins1
	item osd.182 weight 5.460
	item osd.180 weight 5.460
	item osd.181 weight 5.460
	item osd.183 weight 5.460
	item osd.184 weight 5.460
	item osd.185 weight 5.460
	item osd.186 weight 5.460
	item osd.187 weight 5.460
	item osd.188 weight 5.460
	item osd.189 weight 5.460
}
host ceph-osd-22 {
	id -27		# do not change unnecessarily
	# weight 108.926
	alg straw
	hash 0	# rjenkins1
	item osd.136 weight 5.446
	item osd.142 weight 5.446
	item osd.210 weight 5.446
	item osd.212 weight 5.446
	item osd.213 weight 5.446
	item osd.214 weight 5.446
	item osd.215 weight 5.446
	item osd.216 weight 5.446
	item osd.217 weight 5.446
	item osd.218 weight 5.446
	item osd.219 weight 5.446
	item osd.220 weight 5.446
	item osd.221 weight 5.446
	item osd.222 weight 5.446
	item osd.223 weight 5.446
	item osd.224 weight 5.446
	item osd.225 weight 5.446
	item osd.226 weight 5.446
	item osd.228 weight 5.446
	item osd.229 weight 5.446
}
host ceph-osd-01 {
	id -4		# do not change unnecessarily
	# weight 27.179
	alg straw
	hash 0	# rjenkins1
	item osd.8 weight 2.718
	item osd.10 weight 2.718
	item osd.30 weight 2.718
	item osd.39 weight 2.718
	item osd.40 weight 2.718
	item osd.49 weight 2.718
	item osd.50 weight 2.718
	item osd.62 weight 2.718
	item osd.72 weight 2.718
	item osd.81 weight 2.718
}
datacenter rai {
	id -15		# do not change unnecessarily
	# weight 354.141
	alg straw
	hash 0	# rjenkins1
	item ceph-osd-04 weight 27.179
	item ceph-osd-07 weight 27.179
	item ceph-osd-10 weight 27.179
	item ceph-osd-13 weight 27.300
	item ceph-osd-16 weight 54.600
	item ceph-osd-19 weight 54.600
	item ceph-osd-22 weight 108.926
	item ceph-osd-01 weight 27.179
}
root default {
	id -1		# do not change unnecessarily
	# weight 1010.678
	alg straw
	hash 0	# rjenkins1
	item edu weight 354.141
	item ham weight 302.395
	item rai weight 354.141
}

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type datacenter
	step emit
}
rule erasure-code {
	ruleset 1
	type erasure
	min_size 3
	max_size 20
	step set_chooseleaf_tries 5
	step take default
	step chooseleaf indep 0 type datacenter
	step emit
}
rule ecpool {
	ruleset 2
	type erasure
	min_size 3
	max_size 20
	step set_chooseleaf_tries 5
	step take default
	step chooseleaf indep 0 type datacenter
	step emit
}

# end crush map
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
