Re: Placement Groups fail on fresh Ceph cluster installation with all OSDs up and in

On Feb 10, 2015, at 12:37 PM, B L <super.iterator@xxxxxxxxx> wrote:

Hi Vickie,

Thanks for your reply!

You can find the dump in this link:


Thanks!
B.


On Feb 10, 2015, at 12:23 PM, Vickie ch <mika.leaf666@xxxxxxxxx> wrote:

Hi Beanos:
   Would you post the result of "$ ceph osd dump"?

Best wishes,
Vickie

2015-02-10 16:36 GMT+08:00 B L <super.iterator@xxxxxxxxx>:
I'm having a problem with my fresh, unhealthy cluster; the cluster status summary shows this:

ceph@ceph-node1:~$ ceph -s

    cluster 17bea68b-1634-4cd1-8b2a-00a60ef4761d
     health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256 pgs stuck unclean; pool data pg_num 128 > pgp_num 64
     monmap e1: 1 mons at {ceph-node1=172.31.0.84:6789/0}, election epoch 2, quorum 0 ceph-node1
     osdmap e25: 6 osds: 6 up, 6 in
      pgmap v82: 256 pgs, 3 pools, 0 bytes data, 0 objects
            198 MB used, 18167 MB / 18365 MB avail
                 192 incomplete
                  64 creating+incomplete


Where shall I start troubleshooting this?
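
A minimal sketch of one possible first step, based only on the HEALTH_WARN line above and not a confirmed fix for the incomplete PGs: the warning "pool data pg_num 128 > pgp_num 64" is normally cleared by raising pgp_num to match pg_num for that pool, and the stuck PGs can be inspected in more detail first. The pool name "data" is taken from the warning itself.

    # show which PGs are unhealthy and why
    ceph health detail
    # list PGs stuck in the inactive state
    ceph pg dump_stuck inactive
    # bring pgp_num for pool "data" in line with its pg_num
    ceph osd pool set data pgp_num 128
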

P.S. I’m new to CEPH.

Thanks!
Beanos

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




