Re: cephfs toofull

On Mon, Aug 29, 2016 at 12:53 AM, Christian Balzer <chibi@xxxxxxx> wrote:
> On Mon, 29 Aug 2016 12:51:55 +0530 gjprabu wrote:
>
>> Hi Christian,
>>
>>
>>
>>             Sorry for subject and thanks for your reply,
>>
>>
>>
>> > That's incredibly small in terms of OSD numbers, how many hosts? What replication size?
>>
>>     Total hosts: 5
>>
>>     Replicated size: 2
>>
> At this replication size you need to act and replace/add OSDs NOW.
> The next OSD failure will result in data loss.
>
> So your RAW space is about 16TB, leaving you with 8TB of usable space.
>
> Which doesn't mesh with your "df", showing the ceph FS with 11TB used...

When you run df against a CephFS mount, it generally reports the same
usage that RADOS does: if you have replica 2 and 4TB of data, it will
report 8TB used (since, after all, you have used 8TB of raw space!).
There are a few exceptions; for subtree mounts, for example, df can be
based on the quota instead.
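The arithmetic here can be sketched quickly (a hypothetical illustration of the replica accounting, not Ceph code; the numbers are the ones from this thread):

```python
# With replication size 2, every byte of user data consumes two bytes
# of raw cluster capacity, and df on a CephFS mount reports raw usage.

def raw_used(data_bytes, replica_size):
    """Raw cluster bytes consumed by data_bytes of user data."""
    return data_bytes * replica_size

TB = 10**12
print(raw_used(4 * TB, 2) / TB)   # 4TB of data at replica 2 -> 8.0 (TB reported used)
print(16 * TB / 2 / TB)           # 16TB raw at replica 2 -> 8.0 (TB usable)
```

This is also why 11TB "used" in df cannot fit the 8TB of usable space Christian computed above.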
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
