Just two little notes inline below :)
I suggest setting logging to 0/5 on everything. Depending on your desire for reliability and availability, you may want to change your pool min_size/size to 2/4, adjust your CRUSH map to include a rack level, and then instruct CRUSH to place two copies in each rack. That way, if you lose power to a rack, you can still continue with minimal interruption.
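For the logging part, that means lines like these in ceph.conf (0 is the file log level, 5 the in-memory level; the subsystems below are just a few examples, there are many more):

debug ms = 0/5
debug osd = 0/5
debug mon = 0/5
debug filestore = 0/5

And a rough sketch of the pool and CRUSH changes, assuming your pool is called "rbd" and using made-up rack/host names (adjust to your cluster):

# 4 copies, keep serving I/O as long as 2 are up
ceph osd pool set rbd size 4
ceph osd pool set rbd min_size 2
# add a rack layer and move the hosts under it
ceph osd crush add-bucket rack1 rack
ceph osd crush add-bucket rack2 rack
ceph osd crush move rack1 root=default
ceph osd crush move rack2 root=default
ceph osd crush move node-a rack=rack1
ceph osd crush move node-b rack=rack2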
You would want a rule similar to this:
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type rack
        step chooseleaf firstn 2 type host
        step emit
}
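If you prefer to edit the CRUSH map by hand, the usual round trip looks like this (file names are arbitrary):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt and add/adjust the rule above
crushtool -c crushmap.txt -o crushmap.new
# optional sanity check of the mappings the rule produces
crushtool -i crushmap.new --test --rule 0 --num-rep 4 --show-statistics
ceph osd setcrushmap -i crushmap.new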
I would also set:
mon osd downout subtree limit = host
so that if you lose power in a rack it won't try to recover.

(This option bit me pretty hard. I rebooted one node with noout unset and nothing tried to recover, but after I started just one OSD on that node it began recovering anyway, apparently because the "host" subtree no longer counted as down (or something like that). So unless all the OSDs on a host come back at the same time, it will still wreak havoc and rebalance things.)

If you only have two racks, this is not an issue. If you move to three racks, then you can adjust the min_size/size to 2/3 and adjust the rule to:
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type rack
        step emit
}
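To put the three-rack layout into effect you would then shrink the pool and point it at the rule, something along these lines (pool name is again just an example, and 0 is whatever ruleset number the rule ends up with):

ceph osd pool set rbd size 3
ceph osd pool set rbd min_size 2
ceph osd pool set rbd crush_ruleset 0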
Other than that, the defaults are pretty good.
They are not with respect to durability, though (again, this can be found somewhere on this list): you should build a richer hierarchy that limits the replicas more by locality, so that a double or triple failure doesn't eat your data. Not an issue in this case, just saying...
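Just to illustrate what such a hierarchy looks like in a decompiled CRUSH map (excerpt only, all names, ids and weights made up):

host node-a {
        id -2
        alg straw
        hash 0
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
rack rack1 {
        id -5
        alg straw
        hash 0
        item node-a weight 2.000
        item node-b weight 2.000
}
root default {
        id -1
        alg straw
        hash 0
        item rack1 weight 4.000
        item rack2 weight 4.000
}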