3-node setup with pool size=3

I am still playing around with a small setup of 3 nodes, each running 4 OSDs (12 OSDs in total).
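
(If it helps, this is how I verify the layout; just the standard status commands:)

# overall cluster health and OSD count
ceph -s
# CRUSH hierarchy; should list 3 hosts with 4 OSDs each
ceph osd tree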

 

With a pool size of 3, I see the following behavior when one OSD fails:

 

* the affected PGs get marked active+degraded

* there is no data movement/backfill

 

Note: I am using 'ceph osd crush tunables optimal'.
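
(For completeness, this is roughly how I double-check the pool replication settings and the tunables; the pool name 'rbd' is just the default here, adjust for your pool:)

# replication size and min_size of the pool
ceph osd pool get rbd size
ceph osd pool get rbd min_size
# the tunables are part of the JSON output of the CRUSH dump
ceph osd crush dump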

 

2014-01-13 12:04:30.983293 mon.0 10.10.10.201:6789/0 278 : [INF] osd.0 marked itself down
2014-01-13 12:04:31.071366 mon.0 10.10.10.201:6789/0 279 : [INF] osdmap e1109: 12 osds: 11 up, 12 in
2014-01-13 12:04:31.208211 mon.0 10.10.10.201:6789/0 280 : [INF] pgmap v40630: 512 pgs: 458 active+clean, 54 stale+active+clean; 0 bytes data, 43026 MB used, 44648 GB / 44690 GB avail
2014-01-13 12:04:32.184767 mon.0 10.10.10.201:6789/0 281 : [INF] osdmap e1110: 12 osds: 11 up, 12 in
2014-01-13 12:04:32.274588 mon.0 10.10.10.201:6789/0 282 : [INF] pgmap v40631: 512 pgs: 458 active+clean, 54 stale+active+clean; 0 bytes data, 43026 MB used, 44648 GB / 44690 GB avail
2014-01-13 12:04:35.869090 mon.0 10.10.10.201:6789/0 283 : [INF] pgmap v40632: 512 pgs: 458 active+clean, 54 stale+active+clean; 0 bytes data, 35358 MB used, 44655 GB / 44690 GB avail
2014-01-13 12:04:36.918869 mon.0 10.10.10.201:6789/0 284 : [INF] pgmap v40633: 512 pgs: 406 active+clean, 21 stale+active+clean, 85 active+degraded; 0 bytes data, 14550 MB used, 44676 GB / 44690 GB avail
2014-01-13 12:04:38.013886 mon.0 10.10.10.201:6789/0 285 : [INF] pgmap v40634: 512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 4903 MB used, 44685 GB / 44690 GB avail
2014-01-13 12:06:35.971039 mon.0 10.10.10.201:6789/0 286 : [INF] pgmap v40635: 512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 4903 MB used, 44685 GB / 44690 GB avail
2014-01-13 12:06:37.054701 mon.0 10.10.10.201:6789/0 287 : [INF] pgmap v40636: 512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 4903 MB used, 44685 GB / 44690 GB avail
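
(While osd.0 is down but still 'in', I inspect the degraded PGs roughly like this; the PG id '2.3f' below is only a placeholder:)

# list the PGs that are not active+clean
ceph health detail
ceph pg dump_stuck unclean
# for a single PG, compare the up and acting OSD sets
ceph pg map 2.3f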

2014-01-13 12:09:35.336782 mon.0 10.10.10.201:6789/0 288 : [INF] osd.0 out (down for 304.265855)
2014-01-13 12:09:35.367765 mon.0 10.10.10.201:6789/0 289 : [INF] osdmap e1111: 12 osds: 11 up, 11 in
2014-01-13 12:09:35.441982 mon.0 10.10.10.201:6789/0 290 : [INF] pgmap v40637: 512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 836 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:09:36.481087 mon.0 10.10.10.201:6789/0 291 : [INF] osdmap e1112: 12 osds: 11 up, 11 in
2014-01-13 12:09:36.555526 mon.0 10.10.10.201:6789/0 292 : [INF] pgmap v40638: 512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 836 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:09:37.582968 mon.0 10.10.10.201:6789/0 293 : [INF] osdmap e1113: 12 osds: 11 up, 11 in
2014-01-13 12:09:37.677104 mon.0 10.10.10.201:6789/0 294 : [INF] pgmap v40639: 512 pgs: 381 active+clean, 131 active+degraded; 0 bytes data, 836 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:09:40.898194 mon.0 10.10.10.201:6789/0 295 : [INF] pgmap v40640: 512 pgs: 392 active+clean, 120 active+degraded; 0 bytes data, 837 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:09:41.940257 mon.0 10.10.10.201:6789/0 296 : [INF] pgmap v40641: 512 pgs: 419 active+clean, 3 active+remapped, 90 active+degraded; 0 bytes data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:09:43.044860 mon.0 10.10.10.201:6789/0 297 : [INF] pgmap v40642: 512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes data, 839 MB used, 40965 GB / 40966 GB avail
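
The automatic 'out' after roughly 300 seconds matches the default 'mon osd down out interval' (300 s). A sketch of how to check or change it at runtime; the monitor id 'ceph1' and socket path are assumptions for my setup, and injectargs is not persistent:

# show the current value via the monitor admin socket
ceph --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok config show | grep mon_osd_down_out_interval
# change it at runtime on all monitors (lost on restart; set it in ceph.conf under [mon] to persist)
ceph tell mon.* injectargs '--mon-osd-down-out-interval 600'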

2014-01-13 12:11:40.923450 mon.0 10.10.10.201:6789/0 298 : [INF] pgmap v40643: 512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes data, 839 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:11:42.007022 mon.0 10.10.10.201:6789/0 299 : [INF] pgmap v40644: 512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:11:43.082319 mon.0 10.10.10.201:6789/0 300 : [INF] pgmap v40645: 512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes data, 838 MB used, 40965 GB / 40966 GB avail

 

The PGs now remain in the degraded state, but why? I would expect the data to be recovered onto the remaining OSDs.

 

The failed osd.0 is still in the CRUSH map. If I remove osd.0 from the CRUSH map, the PGs reach the 'active+clean' state.
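
(For reference, removing the dead OSD from the CRUSH map boils down to the usual sequence; a sketch, assuming osd.0 is already stopped and marked out:)

# remove the OSD from the CRUSH map (the map change makes the PGs re-peer)
ceph osd crush remove osd.0
# delete its authentication key and the OSD id itself
ceph auth del osd.0
ceph osd rm 0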

 

2014-01-13 12:18:48.663686 mon.0 10.10.10.201:6789/0 303 : [INF] osdmap e1115: 11 osds: 11 up, 11 in
2014-01-13 12:18:48.793016 mon.0 10.10.10.201:6789/0 304 : [INF] pgmap v40647: 512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:18:49.686309 mon.0 10.10.10.201:6789/0 305 : [INF] osdmap e1116: 11 osds: 11 up, 11 in
2014-01-13 12:18:49.759685 mon.0 10.10.10.201:6789/0 306 : [INF] pgmap v40648: 512 pgs: 453 active+clean, 13 active+remapped, 46 active+degraded; 0 bytes data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:18:52.995909 mon.0 10.10.10.201:6789/0 307 : [INF] pgmap v40649: 512 pgs: 463 active+clean, 13 active+remapped, 36 active+degraded; 0 bytes data, 838 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:18:54.083569 mon.0 10.10.10.201:6789/0 308 : [INF] pgmap v40650: 512 pgs: 474 active+clean, 9 active+remapped, 29 active+degraded; 0 bytes data, 839 MB used, 40965 GB / 40966 GB avail
2014-01-13 12:18:55.254406 mon.0 10.10.10.201:6789/0 309 : [INF] pgmap v40651: 512 pgs: 512 active+clean; 0 bytes data, 841 MB used, 40965 GB / 40966 GB avail

 

With pool size=2, on the other hand, data movement starts as soon as the failed OSD gets marked out. Why does size=3 behave differently?
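
(For comparison, a size=2 pool can be set up by simply lowering the replication on a test pool; 'testpool' is just a placeholder name:)

# two replicas, allow I/O with a single remaining copy
ceph osd pool set testpool size 2
ceph osd pool set testpool min_size 1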

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
