The min_size was set to 3; changing it to 1 solved the problem.
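For reference, the change was along these lines (a sketch only; "rbd" is a placeholder for the affected pool):

    ceph osd pool get rbd min_size    # showed 3
    ceph osd pool set rbd min_size 1  # allow I/O with a single replica available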
thanks
On Dec 10, 2016 02:06, "Christian Wuerdig" <christian.wuerdig@xxxxxxxxx> wrote:
Hi,

it's useful to generally provide some detail around the setup, like:

What are your pool settings - size and min_size?
What is your failure domain - osd or host?
What version of ceph are you running on which OS?

You can check which specific PGs are problematic by running "ceph health detail" and then use "ceph pg x.y query" (where x.y is a problematic PG identified from ceph health).
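For instance (a sketch only; the pool name "rbd" and the PG id "0.1a" are placeholders for your actual pool and a PG reported as problematic):

    ceph osd pool get rbd size        # replica count for the pool
    ceph osd pool get rbd min_size    # minimum replicas needed to serve I/O
    ceph health detail                # lists the stuck/degraded PGs
    ceph pg 0.1a query                # detailed state of one problematic PG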
http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-pg/ might provide you some pointers.

One obvious fix would be to get your 3rd osd server up and running again - but I guess you're already working on this.

Cheers
Christian

On Sat, Dec 10, 2016 at 7:25 AM, fridifree <fridifree@xxxxxxxxx> wrote:

Hi,

One of my 3 OSD servers is down and I get this error, and I do not have any access to RBDs on the cluster.
Any suggestions?

Thank you
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com