Re: unknown PG state in a newly created pool.

Did you configure your crush map to have that hierarchy of region, datacenter, room, row, rack, and chassis?  If you're using the default crush map, then it has no idea about any of those places/locations.  I don't know what the crush map would end up looking like after using that syntax if those buckets didn't exist in it to begin with.
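For what it's worth, dumping the tree should show what (if anything) that crush location line actually created. And if you do want that hierarchy later, the buckets are normally created and linked under the default root explicitly first. A rough sketch (the bucket names below are just placeholders, not taken from your config):

ceph osd crush tree                          # see what buckets exist and where osd.0 sits
ceph osd crush add-bucket region1 region     # create a region bucket (placeholder name)
ceph osd crush add-bucket dc1 datacenter     # create a datacenter bucket (placeholder name)
ceph osd crush move region1 root=default     # hang the region under the default root
ceph osd crush move dc1 region=region1       # hang the datacenter under that region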

On Thu, Sep 14, 2017 at 2:55 AM dE . <de.techno@xxxxxxxxx> wrote:
OK, I removed this line and it got fixed --
crush location = "region=XX datacenter=XXXX room=NNNN row=N rack=N chassis=N"
But why does it matter?
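For reference, one way to see what that line was doing is to compare the output of these before and after removing it (assuming the OSD id is 0):

ceph osd tree       # shows whether osd.0 sits under the default root
ceph osd find 0     # reports where the cluster thinks osd.0 is located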

On Thu, Sep 14, 2017 at 12:11 PM, dE . <de.techno@xxxxxxxxx> wrote:
Hi,
    In my test cluster I have just 1 OSD, which is up and in --

1 osds: 1 up, 1 in

I created a pool with size 1, min_size 1, and a pg_num of 1, 2, 3, or any other number, but I cannot write objects to the cluster. The PGs are stuck in an unknown state --

ceph -c /etc/ceph/cluster.conf health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive; Degraded data redundancy: 2 pgs unclean; too few PGs per OSD (2 < min 30)
PG_AVAILABILITY Reduced data availability: 2 pgs inactive
    pg 1.0 is stuck inactive for 608.785938, current state unknown, last acting []
    pg 1.1 is stuck inactive for 608.785938, current state unknown, last acting []
PG_DEGRADED Degraded data redundancy: 2 pgs unclean
    pg 1.0 is stuck unclean for 608.785938, current state unknown, last acting []
    pg 1.1 is stuck unclean for 608.785938, current state unknown, last acting []
TOO_FEW_PGS too few PGs per OSD (2 < min 30)
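In case it helps narrow it down, a couple of read-only checks (the rule name below is the stock default and may differ on this cluster):

ceph pg map 1.0                          # which OSDs (if any) CRUSH maps this PG to
ceph pg dump_stuck inactive              # list all PGs stuck inactive
ceph osd crush rule dump replicated_rule # which root/failure domain the pool's rule targets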

From the documentation --
Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while.

But all OSDs are up and in.
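A quick way to double-check that the OSD really is reporting in (assuming the OSD id is 0):

ceph osd stat              # summary of how many OSDs are up/in
ceph tell osd.0 version    # only answers if the osd.0 daemon is actually reachable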

Thanks for any help!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
