root@testk8s1:~# ceph osd pool ls detail
pool 0 'rbd' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 flags hashpspool stripe_width 0
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 12 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 11 flags hashpspool stripe_width 0
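[Editor's note: a hedged sanity check, not part of the original thread — with replicated size 3, min_size 2, and host as the failure domain (per the rule dump below), losing one of three hosts leaves 2 replicas per PG, which still meets min_size, so I/O should not block on that account alone:]

```python
# Sketch (assumed values taken from the pool listing above): check whether
# losing one host still leaves enough replicas to satisfy min_size.
size, min_size = 3, 2   # from "replicated size 3 min_size 2"
hosts_down = 1          # the failure scenario in the subject line
remaining = size - hosts_down
print("I/O should continue" if remaining >= min_size else "I/O blocks")
# → I/O should continue
```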
I haven't changed any crush rule. Here's the dump:

root@testk8s1:~# ceph osd crush rule dump
[
    {
        "rule_id": 0,
        "rule_name": "replicated_ruleset",
        "ruleset": 0,
        "type": 1,
        "min_size": 1,
        "max_size": 10,
        "steps": [
            {
                "op": "take",
                "item": -1,
                "item_name": "default"
            },
            {
                "op": "chooseleaf_firstn",
                "num": 0,
                "type": "host"
            },
            {
                "op": "emit"
            }
        ]
    }
]
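[Editor's note: the dump above is the stock replicated_ruleset. A small sketch (not from the original thread) confirming its failure domain by parsing the same JSON; the inline string simply reproduces the dump above:]

```python
import json

# The crush rule dump from above, embedded verbatim for illustration.
rules = json.loads("""
[{"rule_id": 0, "rule_name": "replicated_ruleset", "ruleset": 0,
  "type": 1, "min_size": 1, "max_size": 10,
  "steps": [{"op": "take", "item": -1, "item_name": "default"},
            {"op": "chooseleaf_firstn", "num": 0, "type": "host"},
            {"op": "emit"}]}]
""")
for rule in rules:
    # Only chooseleaf steps carry a bucket "type" (the failure domain).
    domains = [s["type"] for s in rule["steps"]
               if s["op"].startswith("chooseleaf")]
    print(rule["rule_name"], "failure domain:", domains)
# → replicated_ruleset failure domain: ['host']
```

With "host" as the failure domain, each of the 3 replicas lands on a different host, so a single node failure removes exactly one replica per PG.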
Kind regards, Grigori
From: Paul Emmerich <paul.emmerich@xxxxxxxx>
Sent: June 7, 2018, 6:26 PM
To: Grigori Frolov
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: I/O hangs when one of three nodes is down

Can you post your pool configuration?
ceph osd pool ls detail
and the crush rule if you modified it.
Paul
2018-06-07 14:52 GMT+02:00 Grigori Frolov <gfrolov@xxxxxxxxx>:
--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com