Sure, here it is:

root@ceph01:~# ceph osd tree
ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.04997 root default
-2 0.04997     host ceph01
 0 0.00999         osd.0        up  1.00000          1.00000
 1 0.00999         osd.1      down        0          1.00000
 3 0.00999         osd.3        up  1.00000          1.00000
 4 0.00999         osd.4      down        0          1.00000
 2 0.00999         osd.2      down        0          1.00000

root@ceph01:~# ceph osd df
ID WEIGHT  REWEIGHT SIZE   USE    AVAIL %USE  VAR
 0 0.00999  1.00000 15348M 14920M  428M 97.21 1.00
 1 0.00999        0      0      0     0     0    0
 3 0.00999  1.00000 15348M 14918M  430M 97.19 1.00
 4 0.00999        0      0      0     0     0    0
 2 0.00999        0      0      0     0     0    0
            TOTAL   30697M 29839M  858M 97.20
MIN/MAX VAR: 0/1.00  STDDEV: 0.01

root@ceph01:~# ceph df
GLOBAL:
    SIZE   AVAIL RAW USED %RAW USED
    30697M  858M   29839M     97.20
POOLS:
    NAME ID USED   %USED MAX AVAIL OBJECTS
    rbd  0  15360M 50.04     1070M    3842

And the size (replicas) of the pool is 2; min_size is 1.

------------------
hzwulibin
2016-03-10

-------------------------------------------------------------
From: Shinobu Kinjo <shinobu.kj@xxxxxxxxx>
Date: 2016-03-10 11:28
To: hzwulibin
Cc: ceph-devel, ceph-users
Subject: Re: New added OSD always down when full flag of osdmap is set

Can you provide us with:

 sudo ceph osd tree
 sudo ceph osd df
 sudo ceph df

Cheers,
S

On Thu, Mar 10, 2016 at 11:58 AM, hzwulibin <hzwulibin@xxxxxxxxx> wrote:
> No, just 98%.
>
> Another scenario: when backfill gets too full, the new OSD likewise
> cannot be added and brought up.
>
> ------------------
> hzwulibin
> 2016-03-10
>
> -------------------------------------------------------------
> From: Shinobu Kinjo <shinobu.kj@xxxxxxxxx>
> Date: 2016-03-10 10:49
> To: hzwulibin
> Cc: ceph-devel, ceph-users
> Subject: Re: New added OSD always down when full flag of osdmap is set
>
> On Thu, Mar 10, 2016 at 11:37 AM, hzwulibin <hzwulibin@xxxxxxxxx> wrote:
>> Hi, cephers
>>
>> Recently, I found that we could not add a new OSD to the cluster when
>> the full flag is set in the osdmap.
>>
>> Briefly, the scenario is:
>> 1. Some OSDs are full, and the osdmap has the full flag set.
>
> Is the usage of those OSDs really 100%? That would be unexpected.
>
>> 2. Add a new OSD.
>> 3. The new OSD service is running, but its state stays down.
>>
>> Here is the issue:
>> http://tracker.ceph.com/issues/15025
>>
>> Does anyone know whether this is expected behavior or a bug?
>>
>> --------------
>> hzwulibin
>> 2016-03-10

--
Email:
shinobu@xxxxxxxxx
GitHub:
shinobu-x
Blog:
Life with Distributed Computational System based on OpenSource
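
For anyone hitting the same situation on a pre-Luminous cluster like the one
above: a minimal sketch of the usual escape hatch, assuming the cluster will
actually have headroom once the new OSD joins, is to raise the full ratio just
long enough for the cluster to leave the full state so the new OSD can boot
and backfilling can move data onto it. The 0.98 value is only an example;
pick something above the current usage (97.2% here) but as low as possible.

 root@ceph01:~# ceph health detail           # confirm which OSDs are full / near-full
 root@ceph01:~# ceph pg set_full_ratio 0.98  # pre-Luminous syntax; Luminous+ uses 'ceph osd set-full-ratio'
 root@ceph01:~# ceph osd tree                # the new OSD should now be able to come up
 root@ceph01:~# ceph pg set_full_ratio 0.95  # restore the default once backfill has freed space

Watch 'ceph -w' while backfill runs, and restore the ratio as soon as usage
drops back below the default threshold.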