ceph df space usage confusion - balancing needed?

Dear Cephalopodians,

Like many others, I'm a bit confused by the "ceph df" output
in a pretty straightforward configuration.

We have a CephFS (Luminous 12.2.7) running with a 4+2 EC profile on the data pool.

I get:
----------------------------------------------------------------------------
# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    824T      410T         414T         50.26 
POOLS:
    NAME                ID     USED     %USED     MAX AVAIL     OBJECTS  
    cephfs_metadata     1      452M      0.05          860G       365774 
    cephfs_data         2      275T     62.68          164T     75056403
----------------------------------------------------------------------------
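
(As a sanity check, RAW USED itself seems consistent with the EC overhead:
a 4+2 profile stores (4+2)/4 = 1.5 bytes raw per byte of pool data, and
275 T * 1.5 = 412.5 T, which is close to the reported 414 T.)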

So about 50 % of the raw space is used, but already ~63 % of the filesystem space is used.
Is this purely due to imperfect balancing?
In "ceph osd df", I do indeed see per-OSD usage ranging from 37.12 % up to 65.02 %.

We have not yet used the balancer plugin.
We don't have any pre-Luminous clients.
Given that, I take it that "upmap" mode would be the recommended choice - correct?
Are there any gotchas when using it on Luminous?

Cheers,
	Oliver

