Re: Pgs stuck on undersized+degraded+peered

Hi,

It's generally useful to provide some details about your setup, such as:
What are your pool settings - size and min_size?
What is your failure domain - osd or host?
Which version of Ceph are you running, and on which OS?

You can check which specific PGs are problematic by running "ceph health detail", and then inspect each one with "ceph pg x.y query" (where x.y is a problematic PG ID taken from the health output).
http://docs.ceph.com/docs/jewel/rados/troubleshooting/troubleshooting-pg/ might provide you some pointers.
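To illustrate, here is a rough sketch of the commands involved (the PG ID and pool name below are placeholders - substitute the values your own cluster reports):

```shell
# Show which PGs are unhealthy and why (requires a running Ceph cluster)
ceph health detail

# Inspect one problematic PG in depth; "1.2f" is a hypothetical PG ID
# taken from the health detail output
ceph pg 1.2f query

# Check the pool settings mentioned above; replace "rbd" with your pool name
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```

If min_size equals size, losing a single host can leave PGs undersized+degraded+peered and block I/O, which would match the symptoms you describe.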

One obvious fix would be to get your third OSD server up and running again - but I guess you're already working on that.

Cheers
Christian

On Sat, Dec 10, 2016 at 7:25 AM, fridifree <fridifree@xxxxxxxxx> wrote:
Hi, 
1 of 3 of my osd servers is down and I get this error
And I do not have any access to rbds on the cluster

Any suggestions? 

Thank you 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


