Hi Gregory,
My doubt has been cleared. By default the cluster stops accepting backfill data once an OSD reaches 82% full, and we can raise this threshold using osd_backfill_full_ratio.
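For the archives, a minimal sketch of how that threshold can be raised (0.90 is just an example value, not a recommendation, and the runtime injection does not persist across OSD restarts):

    # In ceph.conf, under the [osd] section:
    osd_backfill_full_ratio = 0.90

    # Or inject at runtime on all OSDs:
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.90'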
Regards
Prabu GJ
---- On Tue, 30 Aug 2016 17:05:34 +0530 gjprabu <gjprabu@xxxxxxxxxxxx> wrote ----
Hi Gregory,

In our case we have 6TB of data and replica 2, so around 12TB is occupied. I still have 4TB remaining, yet it shows this error:

51 active+undersized+degraded+remapped+backfill_toofull

Regards
Prabu GJ

On Mon, Aug 29, 2016 at 12:53 AM, Christian Balzer <chibi@xxxxxxx> wrote:
> On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:
>> Hi Christian,
>>
>> Sorry for subject and thanks for your reply,
>>
>> > That's incredibly small in terms of OSD numbers, how many hosts? What replication size?
>>
>> Total hosts: 5
>> Replicated size: 2
>
> At this replication size you need to act and replace/add OSDs NOW.
> The next OSD failure will result in data loss.
>
> So your RAW space is about 16TB, leaving you with 8TB of usable space.
>
> Which doesn't mesh with your "df", showing the ceph FS with 11TB used...

When you run df against a CephFS mount, it generally reports the same data as you get out of RADOS -- so if you have replica 2 and 4TB of data, it will report as 8TB used (since, after all, you have used 8TB!). There are exceptions in a few cases; you can have it based off of your quotas for subtree mounts, for one.
-Greg
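To spell out the arithmetic Christian and Greg describe, a small sketch using the numbers from this thread (replica size 2, roughly 16TB raw, 6TB of file data):

    # Usable space is raw space divided by the replica count.
    RAW_TB=16
    REPLICA=2
    echo "usable: $((RAW_TB / REPLICA)) TB"          # 8 TB usable

    # df on the CephFS mount reports raw (replicated) bytes,
    # so 6TB of file data shows up as:
    DATA_TB=6
    echo "df shows: $((DATA_TB * REPLICA)) TB used"  # 12 TB used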
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com