Unknown PG state in a newly created pool


 



Hi,
    In my test cluster I have just one OSD, which is up and in --

1 osds: 1 up, 1 in

I create a pool with size 1, min_size 1, and a pg_num of 1, 2, 3, or any number. However, I cannot write objects to the cluster. The PGs are stuck in an unknown state --
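For reference, the pool was created with commands along these lines (the pool name `testpool` and pg_num of 2 are illustrative, not the exact names I used; `ceph osd pool create` and `ceph osd pool set` are the standard CLI):

```shell
# Create a pool with 2 placement groups (pg_num and pgp_num),
# then drop replication to a single copy for the one-OSD test cluster.
ceph -c /etc/ceph/cluster.conf osd pool create testpool 2 2
ceph -c /etc/ceph/cluster.conf osd pool set testpool size 1
ceph -c /etc/ceph/cluster.conf osd pool set testpool min_size 1
```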

ceph -c /etc/ceph/cluster.conf health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive; Degraded data redundancy: 2 pgs unclean; too few PGs per OSD (2 < min 30)
PG_AVAILABILITY Reduced data availability: 2 pgs inactive
    pg 1.0 is stuck inactive for 608.785938, current state unknown, last acting []
    pg 1.1 is stuck inactive for 608.785938, current state unknown, last acting []
PG_DEGRADED Degraded data redundancy: 2 pgs unclean
    pg 1.0 is stuck unclean for 608.785938, current state unknown, last acting []
    pg 1.1 is stuck unclean for 608.785938, current state unknown, last acting []
TOO_FEW_PGS too few PGs per OSD (2 < min 30)
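The TOO_FEW_PGS line is just arithmetic and is expected here, separate from the unknown-state problem: the monitor divides total PG replicas by the number of OSDs and warns when the ratio falls below the configured minimum (30 in the output above). A minimal sketch of that calculation, assuming my pool's numbers:

```python
# Reproduce the PGs-per-OSD ratio behind the TOO_FEW_PGS warning.
# The threshold of 30 is taken from the "min 30" in the health output above.
def pgs_per_osd(pg_num, size, num_osds):
    # Each PG places `size` replicas, spread across the available OSDs.
    return pg_num * size / num_osds

ratio = pgs_per_osd(pg_num=2, size=1, num_osds=1)
print(ratio)            # 2.0
print(ratio < 30)       # True -> warning fires
```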

From the documentation --
Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while.

But all OSDs are up and in.
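To back that up, these are the read-only commands I'd use to check what the monitor thinks of the OSDs and why the PGs have an empty acting set (standard Luminous-era CLI; `pg query` may simply error while a PG is unknown, since no OSD is serving it):

```shell
ceph -c /etc/ceph/cluster.conf osd stat             # OSD counts: up / in
ceph -c /etc/ceph/cluster.conf osd tree             # CRUSH hierarchy and per-OSD state
ceph -c /etc/ceph/cluster.conf osd crush rule dump  # placement rules the pool must satisfy
ceph -c /etc/ceph/cluster.conf pg 1.0 query         # per-PG peering detail, if reachable
```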

Thanks for any help!
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
