Re: cephfs df with EC pool

Hi John,

Many thanks for your reply.

Glad to hear there is a ticket for this. But also glad that it's not a
"show stopper", just an inconvenience :)

best,

Jake



On 28/06/17 12:29, John Spray wrote:
> On Wed, Jun 28, 2017 at 12:19 PM, Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
>> Dear All,
>>
>> Sorry if this has been covered before, but is it possible to configure
>> cephfs to report free space based on what is available in the main
>> storage tier?
> 
> There's a ticket for this here: http://tracker.ceph.com/issues/19109
> 
> Reporting the whole cluster's df is the general case (in case the user
> is using multiple different pools with CephFS): having it recognise
> when only one pool is in use and reporting that pool's stats is a
> special case to be added.
> 
> John
> 
>> My "df" shows 76%, this gives a false sense of security, when the EC
>> tier is 93% full...
>>
>> i.e. # df -h /ceph
>> Filesystem      Size  Used Avail Use% Mounted on
>> ceph-fuse       440T  333T  108T  76% /ceph
>>
>> # ls -lhd /ceph
>> drwxr-xr-x 1 root root 254T Jun 27 17:03 /ceph
>>
>> but "ceph df" shows that our EC pool is %92.46 full.
>>
>> # ceph df
>> GLOBAL:
>>     SIZE     AVAIL     RAW USED     %RAW USED
>>     439T      107T         332T         75.57
>> POOLS:
>>     NAME         ID     USED     %USED     MAX AVAIL     OBJECTS
>>     rbd          0         0         0          450G             0
>>     ecpool       1      255T     92.46        21334G     105148577
>>     hotpool      2      818G     64.53          450G        236023
>>     metapool     3      274M      0.06          450G       2583306
>>
>>
>> Other info:
>> We are using Luminous 12.1.0, with a small replicated NVMe pool pouring
>> data into a large erasure-coded pool. OS is SL 7.3.
>> Snapshots are enabled, and taken hourly, but very little data has been
>> deleted from the system.
>>
>> Hardware:
>> Six nodes, each with 10 x 8TB OSDs, EC (4+1)
>> Two nodes, each with 2 x 800GB NVMe (2x for metadata + top tier)
>>
>> any thoughts appreciated...
>>
>> Jake
>>
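
For the record, the quoted figures are consistent with each other once
the 4+1 EC overhead is taken into account (rough arithmetic):

  255T of file data x 5/4 EC overhead ~= 319T raw, most of the 332T
  RAW USED; and 332T / 439T ~= 76%, which is the figure df reports.

  255T / (255T + ~21T MAX AVAIL) ~= 92.5%, matching the pool's
  92.46 %USED.

So df is not wrong as such, it is just answering the cluster-wide
question rather than the per-pool one.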


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


