Hi,

On 06/02/2017 04:15 PM, Oleg Obleukhov wrote:
Hello,
I am playing around with Ceph (version 10.2.7
(50e863e0f4bc8f4b9e31156de690d765af245185)) on Debian Jessie and I have
built a test setup:
$ ceph osd tree
ID WEIGHT  TYPE NAME                   UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.01497 root default
-2 0.00499     host af-staging-ceph01
 0 0.00499         osd.0                    up  1.00000          1.00000
-3 0.00499     host af-staging-ceph02
 1 0.00499         osd.1                    up  1.00000          1.00000
-4 0.00499     host af-staging-ceph03
 2 0.00499         osd.2                    up  1.00000          1.00000
So I have 3 OSDs on 3 servers. I also created 2 pools:

$ ceph osd dump | grep 'replicated size'
pool 1 'cephfs_data' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 33 flags hashpspool crash_replay_interval 45 stripe_width 0
pool 2 'cephfs_metadata' replicated size 3 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 31 flags hashpspool stripe_width 0
*snipsnap*
And if I kill one more node I lose access to the mounted file system on
the client. Normally I would expect the replication factor to be
respected and Ceph to create the missing copies of the degraded PGs.
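For reference, the degraded state and the relevant pool settings can be
inspected like this (a sketch, using the pool name from the dump above):

$ ceph status
$ ceph health detail
$ ceph osd pool get cephfs_data min_size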
I tried rebuilding the CRUSH map so that it looks like this, but it did
not help:
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd
        step emit
}
# end crush map
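For reference, the usual round trip for extracting, editing and
re-injecting the CRUSH map looks like this (a sketch; file names are
arbitrary):

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt, e.g. the replicated_ruleset shown above
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new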
3 hosts, 3 OSDs, one host down, one OSD not available -> 2 OSDs left.
Where exactly do you expect Ceph to put the third replica with only two
hosts available?
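For comparison, the stock replicated ruleset separates copies per host
rather than per OSD; a minimal sketch of that placement step (the Ceph
default, not the modified rule quoted above):

step chooseleaf firstn 0 type host

Switching the failure domain to "osd", as in the quoted rule, only
changes where CRUSH tries to place the copies; it does not add OSDs to
place them on, and with min_size 2 the pools stop serving I/O as soon as
fewer than two copies of a PG are up.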
Regards,
Burkhard