Please provide us with the crushmap:

* sudo ceph osd getcrushmap -o crushmap.`date +%Y%m%d%H`

On Fri, Feb 10, 2017 at 5:46 AM, Craig Read <craig@xxxxxxxxxxxxxxxx> wrote:
> Sorry, 2 nodes, 6 daemons (I forgot I had added 2 daemons to see if it made a difference).
>
> On CentOS 7.
>
> ceph -v:
>
> 10.2.5
>
> ceph -s:
>
>      health HEALTH_WARN
>             64 pgs stuck unclean
>             too few PGs per OSD (21 < min 30)
>      monmap e1: 1 mons at {<hostname>=<ip>:6789/0}
>             election epoch 3, quorum 0 <hostname>
>      osdmap e89: 6 osds: 6 up, 6 in; 64 remapped pgs
>             flags sortbitwise,require_jewel_osds
>       pgmap v263: 64 pgs, 1 pools, 0 bytes data, 0 objects
>             209 MB used, 121 GB / 121 GB avail
>                   32 active+remapped
>                   32 active
>
> ceph osd tree:
>
> -1 0.11899 root default
> -2 0.05949     host 1
>  0 0.00490         osd.0  up  1.00000  1.00000
>  3 0.01070         osd.3  up  1.00000  1.00000
>  4 0.04390         osd.4  up  1.00000  1.00000
> -3 0.05949     host 2
>  1 0.00490         osd.1  up  1.00000  1.00000
>  2 0.01070         osd.2  up  1.00000  1.00000
>  5 0.04390         osd.5  up  1.00000  1.00000
>
> Appreciate your help,
>
> Craig
>
> -----Original Message-----
> From: Shinobu Kinjo [mailto:skinjo@xxxxxxxxxx]
> Sent: Thursday, February 9, 2017 2:34 PM
> To: Craig Read <craig@xxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: OSDs stuck unclean
>
> 4 OSD nodes or daemons?
>
> Please run:
>
> * ceph -v
> * ceph -s
> * ceph osd tree
>
>
> On Fri, Feb 10, 2017 at 5:26 AM, Craig Read <craig@xxxxxxxxxxxxxxxx> wrote:
>> We have 4 OSDs in a test environment that are all stuck unclean.
>>
>> I've tried rebuilding the whole environment with the same result.
>>
>> The OSDs are running on XFS disks; partition 1 is the OSD, partition 2 is the journal.
>>
>> We're also seeing degraded PGs despite having 4 OSDs and a default OSD pool size of 2.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
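
A note on the crushmap request above: ceph osd getcrushmap writes a compiled (binary) CRUSH map, so it is normally decompiled with crushtool before reading or editing. A minimal sketch (the output filenames here are arbitrary):

    sudo ceph osd getcrushmap -o crushmap.bin
    # decompile the binary map into plain text for inspection
    crushtool -d crushmap.bin -o crushmap.txt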
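
On the "too few PGs per OSD (21 < min 30)" warning: with 64 PGs, a replica count of 2, and 6 OSDs, each OSD carries roughly 64 * 2 / 6 ≈ 21 PG copies, which is below the warning threshold of 30. A sketch of raising the PG count, assuming the only pool is the default "rbd" pool (check with ceph osd lspools); note that pg_num can only be increased, never decreased:

    ceph osd pool get rbd size        # confirm the replica count
    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128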