Hi,
I was testing a RadosGW setup and observed strange behaviour: RGW becomes
unresponsive, or won't start at all, whenever the cluster health is degraded
(e.g. while one of the OSDs is being restarted). I'm probably doing something
wrong, but I couldn't find any information about this.
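Roughly what I do to reproduce it (just a sketch; osd.0 and the gateway
host are examples):

  # restart one OSD on its node (sysvinit, 0.56.x)
  service ceph restart osd.0
  # watch cluster health while recovery is in progress
  ceph -s
  ceph health detail
  # meanwhile, try a simple request against the gateway
  curl -i http://<rgw-host>/

While the health is degraded, the request either hangs or radosgw refuses
to start.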
I'm running 0.56.3 on a 3-node cluster (3x MON, 3x OSD). I increased the
replication factor for the RGW-related pools so that the cluster can
survive a single node failure while keeping monitor quorum.
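For reference, this is roughly how I raised the replication size on the
RGW-related pools (repeated for each of them; .rgw and .rgw.buckets shown
as examples):

  ceph osd pool set .rgw size 3
  ceph osd pool set .rgw.buckets size 3

The pools now look like this (from ceph osd dump):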
pool 0 'data' rep size 2 crush_ruleset 0 object_hash rjenkins pg_num 256 pgp_num 256 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 crush_ruleset 1 object_hash rjenkins pg_num 256 pgp_num 256 last_change 1 owner 0
pool 2 'rbd' rep size 2 crush_ruleset 2 object_hash rjenkins pg_num 256 pgp_num 256 last_change 1 owner 0
pool 3 'pbench' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 150 pgp_num 150 last_change 11 owner 0
pool 4 '.rgw' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 90 pgp_num 8 last_change 111 owner 0
pool 5 '.rgw.gc' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 112 owner 0
pool 6 '.rgw.control' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 114 owner 0
pool 7 '.users.uid' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 117 owner 0
pool 8 '.users.email' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 118 owner 0
pool 9 '.users' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 115 owner 0
pool 11 '.rgw.buckets' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 108 owner 0
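In case it helps with diagnosis, these are the commands I'd use to see
which placement groups are affected while one OSD is down (again only a
sketch):

  # overall PG state
  ceph pg stat
  # PGs stuck inactive or unclean
  ceph pg dump_stuck inactive
  ceph pg dump_stuck unclean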
Any idea how to fix this?
--
Rustam.