I will try to change the replication size now as you suggested, but how is that related to the unhealthy cluster?
Hi Vickie,
My OSD tree looks like this:
ceph@ceph-node3:/home/ubuntu$ ceph osd tree
# id    weight  type name               up/down reweight
-1      0       root default
-2      0               host ceph-node1
0       0                       osd.0   up      1
1       0                       osd.1   up      1
-3      0               host ceph-node3
2       0                       osd.2   up      1
3       0                       osd.3   up      1
-4      0               host ceph-node2
4       0                       osd.4   up      1
5       0                       osd.5   up      1
Hi Beanos:
BTW, if your cluster is just for testing, you may try to reduce the replica size and min_size:
"ceph osd pool set rbd size 2;ceph osd pool set data size 2;ceph osd pool set metadata size 2 " "ceph osd pool set rbd min_size 1;ceph osd pool set data min_size 1;ceph osd pool set metadata min_size 1"
Open another terminal and use the command "ceph -w" to watch the PG status.
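If you prefer a one-shot view instead of a continuous watch, the standard status commands are also useful here:

ceph -s
ceph health detail

The first prints the current cluster summary, and the second lists which PGs are degraded or stuck, which helps check whether they reach active+clean after the change.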