Did you configure your crush map to have that hierarchy of region, datacenter, room, row, rack, and chassis? If you're using the default crush map, it has no idea about any of those locations. I don't know what the map would end up looking like after applying that syntax when those buckets didn't exist to begin with.
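For what it's worth, the buckets would have to be created and linked before a crush location string like that could resolve. A rough sketch of the commands involved (every bucket name below, and the host name myhost, is a made-up placeholder, not something from your cluster):

# see what the map currently knows about; the default map only has
# the "default" root with hosts and OSDs under it
ceph osd tree

# create one bucket per level of the hierarchy
ceph osd crush add-bucket region-xx region
ceph osd crush add-bucket dc-xxxx datacenter
ceph osd crush add-bucket room-nnnn room
ceph osd crush add-bucket row-n row
ceph osd crush add-bucket rack-n rack
ceph osd crush add-bucket chassis-n chassis

# nest them, top down
ceph osd crush move dc-xxxx region=region-xx
ceph osd crush move room-nnnn datacenter=dc-xxxx
ceph osd crush move row-n room=room-nnnn
ceph osd crush move rack-n row=row-n
ceph osd crush move chassis-n rack=rack-n

# hang the whole thing off the default root (which the default replicated
# rule starts from) and move the host bucket holding the OSD under it
ceph osd crush move region-xx root=default
ceph osd crush move myhost chassis=chassis-n

Otherwise the OSD can end up in a branch that the CRUSH rule never walks, which would explain PGs that never map to anything.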
On Thu, Sep 14, 2017 at 2:55 AM dE . <de.techno@xxxxxxxxx> wrote:
Ok, removed this line and it got fixed --

crush location = "region=XX datacenter=XXXX room=NNNN row=N rack=N chassis=N"

But why will it matter?

_______________________________________________

On Thu, Sep 14, 2017 at 12:11 PM, dE . <de.techno@xxxxxxxxx> wrote:

Hi,

In my test cluster I have just 1 OSD, which is up and in --

1 osds: 1 up, 1 in

I create a pool with size 1, min_size 1, and a pg_num of 1, 2, 3, or any other number. However, I cannot write objects to the cluster; the PGs are stuck in an unknown state --
ceph -c /etc/ceph/cluster.conf health detail
HEALTH_WARN Reduced data availability: 2 pgs inactive; Degraded data redundancy: 2 pgs unclean; too few PGs per OSD (2 < min 30)
PG_AVAILABILITY Reduced data availability: 2 pgs inactive
pg 1.0 is stuck inactive for 608.785938, current state unknown, last acting []
pg 1.1 is stuck inactive for 608.785938, current state unknown, last acting []
PG_DEGRADED Degraded data redundancy: 2 pgs unclean
pg 1.0 is stuck unclean for 608.785938, current state unknown, last acting []
pg 1.1 is stuck unclean for 608.785938, current state unknown, last acting []
TOO_FEW_PGS too few PGs per OSD (2 < min 30)
From the documentation --

"Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while."

But all OSDs are up and in.

Thanks for any help!
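For reference, the steps were roughly these (the pool name test and the object are placeholders, not the exact names used):

# single-PG pool with one replica
ceph osd pool create test 1 1
ceph osd pool set test size 1
ceph osd pool set test min_size 1

# the write that hangs, and the PGs it hangs on
rados -p test put obj1 /etc/hosts
ceph pg dump_stuck inactive
ceph pg 1.0 query   # may itself error out while the PG has no acting OSD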
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com