Re: creating+incomplete issues

Please paste 'ceph osd tree'.
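
On a healthy three-OSD cluster, each OSD shows up under its own host bucket,
roughly like this (IDs, host names, and weights here are only illustrative):

$ ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.02939 root default
-2 0.00980     host node1
 0 0.00980         osd.0        up  1.00000          1.00000
-3 0.00980     host node2
 1 0.00980         osd.1        up  1.00000          1.00000
-4 0.00980     host node3
 2 0.00980         osd.2        up  1.00000          1.00000

If all three OSDs sit under a single host, the default CRUSH rule (one
replica per host) still cannot place 3 copies, which would match warnings
like yours.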

Robert LeBlanc

Sent from a mobile device, please excuse any typos.

On Oct 28, 2015 6:54 PM, "Wah Peng" <wah_peng@xxxxxxxxxxxx> wrote:
Hello,

Just did it, but the health is still not good. Can you help? Thanks.

ceph@ceph:~/my-cluster$ ceph osd stat
     osdmap e24: 3 osds: 3 up, 3 in

ceph@ceph:~/my-cluster$ ceph health
HEALTH_WARN 89 pgs degraded; 67 pgs incomplete; 67 pgs stuck inactive; 192 pgs stuck unclean
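
In case it helps, I can also paste the output of these - the pool name 'rbd'
below is just my guess at the default:

ceph health detail            # lists each stuck PG and its state
ceph pg dump_stuck inactive   # the inactive PGs and their acting OSDs
ceph osd lspools              # which pools actually exist
ceph osd pool get rbd size    # replica count for the (assumed) 'rbd' pool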


On 2015/10/29 Thursday 8:38, Lindsay Mathieson wrote:

On 29 October 2015 at 10:29, Wah Peng <wah_peng@xxxxxxxxxxxx> wrote:

    $ ceph osd stat
          osdmap e18: 2 osds: 2 up, 2 in

    This is what it shows.
    Does it mean I need to add up to 3 OSDs? I just used the default setup.


If you went with the defaults, then your pool size will be 3, meaning it
needs 3 copies of the data (replica 3) to be healthy - as you only have
two nodes/OSDs, that can never happen :)

Your options are:
- add another node and OSD, or
- reduce the pool size to 2 (ceph osd pool set <poolname> size 2) - see the
sketch below.
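
A minimal sketch of the second option, assuming the stock pool names (check
what you actually have with 'ceph osd lspools'):

ceph osd pool set rbd size 2       # two copies instead of three
ceph osd pool set rbd min_size 1   # optional: allow I/O with one copy while recovering
ceph -w                            # watch the PGs go active+clean

Repeat the size change for each pool the cluster has.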



--
Lindsay
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
