Hello,

First of all, the subject is misleading. It doesn't matter that you're
using CephFS; "toofull" is a state the OSDs are in, not the filesystem.

On Mon, 29 Aug 2016 12:06:21 +0530 gjprabu wrote:

> Hi All,
>
> We are new with cephfs and we have 5 OSDs, each 3.3TB in size.

That's incredibly small in terms of OSD count; how many hosts? What
replication size?

Something doesn't add up here: at 5 x 3.3TB you would have 16.5TB raw,
and with 12TB used that would imply a replication of 1...

And 3.3TB is also quite large for a single OSD, especially with this
configuration.

> As of now around 12 TB of data has been stored. Unfortunately osd5
> went down, and during remapped+backfill the error below is shown even
> though we have around 2TB of free space. Kindly provide a solution to
> this issue.

Don't build a Ceph cluster that can't survive a node or OSD failure.
Set your near full and full ratios accordingly.

The obvious solution for your current problem is of course to re-add the
dead OSD if possible, or to add more OSDs (way more OSDs).

Christian

>     cluster b466e09c-f7ae-4e89-99a7-99d30eba0a13
>      health HEALTH_WARN
>             51 pgs backfill_toofull
>             57 pgs degraded
>             57 pgs stuck unclean
>             57 pgs undersized
>             recovery 1139974/17932626 objects degraded (6.357%)
>             recovery 1139974/17932626 objects misplaced (6.357%)
>             3 near full osd(s)
>      monmap e2: 3 mons at {intcfs-mon1=192.168.113.113:6789/0,intcfs-mon2=192.168.113.114:6789/0,intcfs-mon3=192.168.113.72:6789/0}
>             election epoch 10, quorum 0,1,2 intcfs-mon3,intcfs-mon1,intcfs-mon2
>       fsmap e26: 1/1/1 up {0=intcfs-osd1=up:active}, 1 up:standby
>      osdmap e2349: 5 osds: 4 up, 4 in; 57 remapped pgs
>             flags sortbitwise
>       pgmap v681178: 564 pgs, 3 pools, 5811 GB data, 8756 kobjects
>             11001 GB used, 2393 GB / 13394 GB avail
>             1139974/17932626 objects degraded (6.357%)
>             1139974/17932626 objects misplaced (6.357%)
>                  506 active+clean
>                   51 active+undersized+degraded+remapped+backfill_toofull
>                    6 active+undersized+degraded+remapped
>                    1 active+clean+scrubbing
>
> 192.168.113.113,192.168.113.114,192.168.113.72:6789:/  ceph  14T  11T  2.4T  83% /home/build/cephfsdownloads
>
> Regards
> Prabu GJ

-- 
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
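
P.S.: To put the above into commands, here is a rough sketch only. It
assumes a Jewel-era cluster (as the fsmap/osdmap output suggests), uses
"<poolname>" and the OSD id "5" as placeholders, and the ratio values are
just examples; verify the option names and pick values that fit your
setup before running anything.

    # Verify the pool replication size and the per-OSD utilisation:
    ceph osd pool get <poolname> size
    ceph osd df
    ceph osd tree

    # Pre-Luminous syntax for the near full / full thresholds; choose
    # values that still leave room for losing an OSD or a host:
    ceph pg set_nearfull_ratio 0.75
    ceph pg set_full_ratio 0.90

    # As a stop-gap only, the backfill threshold (default 0.85) can be
    # raised so the remapped PGs make progress; the real fix is more
    # capacity:
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'

    # If the failed OSD's disk is still usable, bring it back in
    # (systemd-based installation assumed):
    systemctl start ceph-osd@5
    ceph osd in osd.5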